Maya: Secrets of the Pros (Sybex, 2003)
Robin Akin and Remington Scott of Weta Digital combine motion-capture data with keyframed animation in three powerful tutorials to help you master the integration of these two types of animation. Page 85

Keep your characters from mouthing off as Dariush Derakhshani of Sight Effects and John Kundert-Gibbs and Rebecca Johnson of Clemson University show some economical and flexible methods for matching your model's lip motion to your audio track. Page 119
About the Authors
Susanne Werth

Susanne Werth got into animation in Mexico in 1997. She graduated from Fachhochschule Furtwangen in computer science and media and started out as a character animator with a children's TV series in Potsdam, Germany. After a period of time working as layout artist and layout supervisor, she went back to animation and stepped forward into MEL scripting. She currently works as character animator and MEL programmer at a children's TV production company in Germany.

Frank Ritlop

Frank Ritlop is a Canadian living in New Zealand. He has been working in the CG industry for more than nine years and has a degree in film animation from Concordia University in Montreal. He has worked as a lighting supervisor for CG studios in Montreal and Munich and as a lighting artist at Square USA on Final Fantasy: The Spirits Within. He is currently working at Weta Digital as a Shot Technical Director on the second installment of the Lord of the Rings trilogy.
Remington Scott

Remington Scott is currently overseeing motion capture production at Weta Digital for The Lord of the Rings: The Two Towers and was the Director of Motion Capture at Square Pictures for Final Fantasy: The Spirits Within and The Matrix: Flight of the Osiris. He has professionally created digital animations for 16 years. During this time, he was the Interactive Director at Acclaim Entertainment for the multi-platinum selling Turok: Dinosaur Hunter, and he also co-created the first digitized home video game in 1986, WWF: Superstars of Wrestling.

Habib Zargarpour

Habib Zargarpour is Associate Visual Effects Supervisor at Industrial Light & Magic. He is a recipient of the British Academy Award for Best Achievement in Visual Effects, as well as being nominated for an Academy Award for Best Achievement in Visual Effects for his work on both The Perfect Storm and Twister. His other credits include the upcoming Signs, Star Wars: Episode I The Phantom Menace, Spawn, Star Trek: First Contact, Jumanji, Star Trek: Generations, and The Mask. Habib joined ILM in the early '90s, after working as a graphic artist and fine arts illustrator since 1981. He received his B.A.S.C. in mechanical engineering from the University of British Columbia.
three Organix—Modeling by Procedural Means 51
Mark Jennings Smith
Stop and Smell the Roses: A Primer 52
A Visual Maya Zoo 54
Bring Simple Life to a Creation 56
Abusing the Cone 59
Alternate Uses of Paint Effects 65
Organix and Paint Effects 68
Summary 81
Part Two: Putting Things in Motion: Animation

In Chapter 4, Robin Akin and Remington Scott cover the fascinating art and science of combining motion-captured data from live actors with keyframed animation dreamed up by the animator—you—sitting in a studio. Using high-quality motion capture data and a powerful animation setup, the authors walk you (literally!) through the process of combining these two types of animation.

Chapter 5 covers the exacting art of lip-synching mouths with pre-recorded audio tracks. Using examples from both cartoon and photo-real cases, Dariush Derakhshani, John Kundert-Gibbs, and Rebecca Johnson present the most efficient and flexible methods for matching your model's lip motion to your audio track, and they even provide tips on getting the best sound recording you can for your work.

Part Three: The Numbers Game: Dealing with Complex Systems

In Chapter 6, Emanuele D'Arrigo shows you how to create an entire crowd of "Zoids" from a single base model and then how to get them all to do "the wave" on cue! In this chapter, you will learn how to vary the size, shape, and color of your base model and how to introduce random variety to your characters' actions, producing believable crowd motion in no time flat.

In Chapter 7, John Kundert-Gibbs and Robert Helms tackle the problem of taming Maya's built-in dynamics engine to produce specific, controllable effects for specific animation needs. Whether you need tight control over a stylized reality or to get a complex effect to look as real as "the real thing," this chapter will provide insight into getting the job done on budget and on time.

Chapter 8 is all about particles, particles, and more particles! In this chapter, Habib Zargarpour takes you on a backstage tour of the process used to create the massively powerful and realistic wave mist from The Perfect Storm. Here, from one of the true masters of the insanely complex, are the secrets to working with huge datasets of particles to create photo-real production work.

If you've ever wanted to create a simple graphical control for your animated characters, Chapter 9 is for you. First Susanne Werth helps you refine your goals in creating a GUI animation control; then she walks you through the steps of creating one yourself. When you finish, you'll have a working template into which you can fit just about any character to speed up your (or someone else's) animation task.

Part Four: Endings: Surfacing and Rendering

In Chapter 10, Frank Ritlop shows you how the pros do lighting. Not satisfied with a few lights shining on his example scene, he meticulously demonstrates the layers of lights that need to be built up to form the beautiful finished product. Once you've worked through this chapter, you'll have the understanding and confidence to create your own complex and beautifully lit scenes.

Chapter 11 is all about getting the most out of your rendering machines, whether they are dedicated render farm boxes or are being shared by users and render tasks. Timothy A. Davis and Jacob Richards explain the general theory of running large render jobs in parallel and then take you on a tour of their own distributed renderer—included on the CD—which you can use to speed up your rendering tasks.

About the CD

The CD-ROM that accompanies this book is packed with useful material that will help you master Maya for yourself. In addition to a working demo of xFrog (a 3D modeling package that creates some very cool organic models), we have scene files, animations, and even source code relating to the chapters in the book. Some CD highlights are:

• Models, textures, and animation of a flour sack
• A completed subdivision surface horse, plus animation of it walking
• "Organix" scene files and animations (see Chapter 3)
• Motion-capture data and a pre-rigged skeleton to place it on
• Models and animation demonstrating various lip-synching techniques
• Models and MEL code for generating crowd scenes, plus animation of this crowd doing "the wave"
• Models demonstrating rigid body solutions to a billiards break and shattering glass, plus animation of various stages of work on these scenes
• A simplified version of the "crest mist" generating scene used in The Perfect Storm
• MEL code for generating character animation GUI controls
• Scenes with models and textures for use in practicing lighting technique
• A completely functional Windows-based distributed rendering solution, plus source code for it

As you can see from this list, rather than having to create entire scenes from scratch for each chapter, the accompanying scenes and animations get you started and help guide you to appropriate solutions quickly and efficiently. Additionally, after you go through a chapter once, you can grab a sample scene file or bit of code and play with it, finding your own unique and wonderful solutions to the challenges presented in each chapter.

Staying Connected

To stay up-to-date on Maya: Secrets of the Pros, please go to the book's page at www.sybex.com. If you have any questions or feedback, please contact John Kundert-Gibbs at jkundert@cs.clemson.edu.

Sharing Our Experiences

As you can see, the subjects covered in this book unveil nearly all Maya has to offer and likewise most areas of 3D graphics. Often these authors will reveal little-known secrets or point out ways to perform a task that make that job much quicker than you might have imagined. Whether you're an animator, a surfacing artist, a Technical Director, or someone who loves all the phases of 3D production, there is a delicious dish somewhere in this book for you. And whether you proceed from first to last course or pick and choose your meals, there will be something in Maya: Secrets of the Pros to satisfy your appetite for months if not years to come.

What has become clear to all of us who worked on this book is that, no matter how long you have worked with Maya or in 3D graphics, there is always more to learn, and that the joy of learning is half the fun of working in this profession. We also have been inspired and amazed by one another's work. Finally, we have distilled in our little enclave one of the greatest aspects of working in the graphics community: there is an openness, generosity, and willingness to share in this community that is simply astounding. As you sample the chapters in this book, recall that someone is serving you years of hard-won experience with every page you read and every exercise you work through. All you have to do is pull up a chair and dig in.

Working on these pages has been a reinvigorating experience. All of us have felt again the joy of discovery and the equally wonderful joy of helping others who appreciate our unique talents. We trust you will feel our heartfelt love of what we do, and in sharing it with you, on any given page. And don't forget to share your passion with others!

We have had great pleasure in preparing this eleven-course meal, and invite you to partake of the feast we have set out for you!

—John Kundert-Gibbs
April 28, 2002
Accelerating the Preproduction
Process
Eric Kunzendorf
video, and feature film studios all use this methodology to varying degrees. But these are
examples of team-based production; individual animation producers/directors, such as those
trying to produce an animation for use on their demo reels, are responsible for each phase
themselves. Spending too much time on any one phase or group of phases can be fatal to the
entire process. As an animator, I tend to focus on getting to the animation phase as quickly
as possible, while still having an appealing and easy-to-animate character. My goal in this
chapter is to show you how to speed up these early phases of the CG (computer graphics)
animation process.
Becoming skilled at computer animation involves negotiating the dichotomy of learn-
ing the technical processes of the discipline while attempting to master the art forms of qual-
ity character animation. Complicating this process is the complexity of character animation
itself, the enormity of Maya (or other high-end 3D packages) as a piece of software, and the
time constraints inherent in any animation project. Nobody has an unlimited amount of time
for work (and making peace with that fact is a superb first lesson), so operating within a
given amount of time is an inevitable constraint. Coming to grips with the limited resource
of time and comparing it with the enormous volume of work involved makes you realize
how daunting the challenge of creating quality CG animation really is—especially in a solo
or small group project!
"It's [baseball] supposed to be hard. I fit wasn't bard, everyone would do it. The
'hard' is what makes it great."—}immy Dugan (Tom Hanks) in A League of
Their Own
Computer animation is like baseball, but with the added difficulty of being perceived
as easy to accomplish. Unfortunately, the rah-rah making-of documentaries, feature-laden advertisements, and skilled demonstrations by power-using industry professionals do nothing to dispel this idea. These facts, coupled with the intrinsic coolness of creating animated creatures and worlds, have led thousands of individuals to try their hand at animation.
Lured by obsolete tales of Hollywood discovering, hiring, and training people off the street
and egged on by the desire to see their creations come to life, people often work diligently to
learn the software but then become puzzled and, in some cases, discouraged when their skills
are not marketable at their desired employment level. This is especially true of character ani-
mation. Character animation is hard, and that "hard" truly does make it great! If it were
easy, everybody would do it well.
Animating well requires that you focus on what you want to accomplish as precisely as
possible before sitting down at the computer. The first three phases of the animation pro-
cess—scriptwriting, thumbnailing, and drawing—are often called preproduction. They are
key because you have precious little time to complete an animation, and the planning, far
from increasing production time, actually shortens it.
Disney all followed this idea in every critically and financially successful animated movie
they produced. They come up with interesting stories and then expend enormous amounts of
R&D, artistic effort, and technical resources to produce their movies. Game developers
operate on much the same principle.
Scriptwriting is an art form unto itself. The intricacies of the three-act narrative and
beyond are way beyond the scope of this short chapter. So, recognizing that a larger story
will provide a narrative framework in which the discrete actions of the character carry that
story, I'll confine my discussion of "script" to individual actions within that story. This
process works well when you are working to create a demo reel, which can be made up of
individual animation exercises as well as larger, more complex narratives.
Although having no plan is bad enough, having a plan that is too general is arguably
worse. One of my most difficult tasks in guiding animation students is paring down their
ideas to manageable levels. Their troubles begin when they present me with ideas such as the
following:
• "I want my character to fight."
• "I want my character to show emotions."
• "I want to do a single skin character." Why? "Because it's a test of my modeling
skills."
• And my all-time favorite: "I want to create a realistic character."
And these are only a few. These goals, while noble, are much too general. Hidden in
these simple statements are vast amounts of work that can rapidly overwhelm an individual.
For example, the "I want my character to fight" story idea lacks some important descriptive
work. Unless the character is going to shadow box, the animator will need to model, texture,
and rig another character. So the animator has to do twice the work. Next is the question of
what type of fighting the characters will use. Will it be American-style boxing, karate-do,
wushu, or drunken monkey style kung fu? Will it be kicks, punches, throws, or grappling?
Every animator has some desire to create Matrix-style, "wall-walking" fight scenes, but
many fail to realize that each move within those fight scenes is extremely well thought out
and choreographed. In animation, you may need to build and rig the character in several
specific ways to accommodate those actions.
The cure for such over-generalization is a detailed description of every action. There-
fore, an acceptable beginning for each of these ideas in the earlier list would go something
like this:
• "My character will be required to throw high, low, and middle-height snap and round-
house kicks at targets to the front, the side, and the back. He will also need to punch
and block with fists, open hands, and elbows."
• "My character will need to happily say the phrase 'the rain in Spain falls mainly in the
plane' while winking leeringly off camera and sneaking cautiously off stage."
• "My character is a superhero in a red, skin-tight body suit. He will wear skin-tight yel-
low gloves that rise to mid-forearm and shoes that rise to his shin. He will wear tight blue
Speedo-style shorts and have a big blue T emblazoned on his chest. His body suit will
extend all over his head Spiderman style with only cutouts for eyes. He will need to
run, jump, fly, kick, and punch."
• "I want my character to appear naturalistic enough to engage the audience. I will con-
centrate on making her eyes, face, and hair convincing so as to be better able to con-
vince the audience of the emotions she will have to display."
Although these statements are substantial improvements over the first set of ideas,
these statements are only the beginnings of practical concepts. The next step is to describe in
detail what the character will actually do within the context of the story. You need to plan
each gesture, action, and expression before you begin modeling. By describing each action in
meticulous detail, you'll get a good feel for exactly what you will need to animate, and unre-
alistic expectations will evaporate under the hot light of thorough examination.
Concurrently with writing the story for the character, you must draw constantly. Visu-
alizing both the appearance of the character and the performance that character will give is
essential to completing a successful animation. At this point, the type of preproduction
depends on your individual skills. If you are a strong draftsperson, you should spend more
time drawing than writing; if you're a weak draftsperson, you can spend more time writing if
you are more comfortable doing so. These drawings need not be highly detailed. In fact, you
can indicate motions with simple, not-much-more-than-stick figures drawn rapidly (see Fig-
ure 1.1). The quantity of drawings is what is important at this stage; drawing the character
accurately is a secondary consideration. Furthermore, you need to fully describe each motion
in the script. A kick should have at least four drawings: a ready pose, a raised leg pose, a
fully extended kick pose, and a recovery pose (see Figure 1.2). Keep in mind that these draw-
ings help when you begin animating.
But wait! We haven't decided what the character
even looks like, so how can we decide what it will do?
This vagueness is intentional because when possible,
the final design of your character is determined by
what you, the animator, plan to do with it. Remember
that every facet of the animation serves the story you
plan to tell, and you relate that story primarily
through the actions of the character. Naturally the
appearance of that character is secondary to the story
itself. For example, it is counterproductive to expect
that an obese character can bend over, touch its toes,
and vault into a forward flip (this is not impossible,
but the modeling and rigging chores would be a
technical director's time-consuming nightmare!). If
your goal is to create a believable performance, the model/character should help to achieve that goal.

Figure 1.1: Some gesture drawings of rapidly drawn figures

Conversely, you might already have a char-
acter pre-built or preconceived. In such cases, it is up to you to find appropriate actions for
that character. Furthermore, animators working in an effects company may have this deci-
sion taken out of their hands entirely because they have to animate someone else's character;
the animator then has no choice but to work with what they are given. My point is that ani-
mators should do everything possible to enhance their ability to create a believable perform-
ance, and to the extent the animator has control, the design of the character should not make
that goal more difficult.
Drawing is the beginning of modeling, so the beginning of modeling doesn't happen in
Maya, but on paper. Here, there is no substitute for drawing ability. Having determined
what your character can do, draw the character. If you are not proficient in drawing, you
will have to either muddle through or hire someone who is proficient. Ideally, the final prod-
uct of this phase is a fully posed "character" drawing that reveals the personality of the
model (see Figure 1.3). You use this drawing as reference/inspiration later when animating.
More important, you need to produce a front and side schematic drawing of the character.
The details of these drawings should match in both views. Graph paper is useful for lining up
details (see Figure 1.4).
Modeling Methods
Whereas scripting, thumbnailing, and drawing provide the conceptual basis for your work,
the model provides the visual expression of your vision. Producing the model and rigging it
for animation are the most time-consuming and laborious phases of any animation. You
must carefully choose the method you use to create your characters: this choice has dramatic
implications later in terms of setup, animation, texturing, and rendering.
The mistaken tendency of most people is to believe that modeling options are simply
divided between polygons and NURBS surfaces. This is a harmful oversimplification, espe-
cially given the advent of Subdivision surfaces available with Maya Unlimited. Rather, a
more useful discussion centers around the degree to which a character breaks down into sep-
arate objects. Single skin vs. separate objects becomes the real question, and it is a visual
rather than a technical problem.
The great mistake that many beginning independent animators make is trying to force
the idea of animating a "single skin" character. They tend not to realize that few, if any, char-
acters that they see either on TV or in movies are all one piece of geometry. Most, if not all,
are segmented geometry; the seams are cleverly hidden by another piece of geometry or by the
border between textures. This isn't noticeable because the character's geometry is so well ani-
mated that we are convinced of the gestalt of the character. We don't see him, her, or it as sep-
arate pieces; we see the character Woody, Buzz, Shrek, or Aki. Thus, the central point in
modeling for animation is that a well-animated piece of geometry always transcends the mod-
eling method used to create it. Consequently, you want to use the simplest modeling method
necessary to create a character to animate.
Four basic modeling methods are available in Maya:
• NURBS patches
• Trimmed and blended NURBS objects
• Polygons
• SubDivision surfaces
Each of these methods has its own advantages and disadvantages when modeling char-
acters for animation.
NURBS Patches
Using NURBS (Non-Uniform Rational B-Spline) is the classic way to model organic shapes
in Maya. NURBS consist of four-sided patches that are often perplexing to artists who are
new to Maya. At present, you cannot directly manipulate a NURBS surface; modeling occurs
when you manipulate hulls and control vertices (CVs). Manipulating patches requires that
you pay careful attention to surface UV parameterization. Edge and global stitching osten-
sibly ensure smooth joins between patches; unfortunately, keeping edges joined through ani-
mation can be difficult. Also, without a 3D paint program (such as Deep Paint), texturing
NURBS with complex image maps can be problematic. Nevertheless, NURBS surfaces are
efficient; so they animate quickly; and properly built, they provide a highly organized surface
that you can easily convert to polygons.
hidden areas are called trims. You can connect these holes relatively smoothly to other
objects with blends. Blends are pieces of geometry that are interactively generated between
two curves or isoparms. As each curve moves, the blend follows. Resembling lofts, these
curves also maintain a certain amount of tangency with their generating surfaces. Because
there are fewer surfaces overall, trimmed and blended surfaces are generally easier to texture
map. Unfortunately, when activated, they slow animation unacceptably and are unusable for
interactive performance.
You can now set the NodeState attribute to Blocking. By connecting the NodeState attribute to a custom integer attribute on one of your animation controls using Set Driven Key, you can control the display of all your blends at once. You can do the same for your trims by selecting the trimmed object, clicking the trim node under INPUTS in the Channel Box, and displaying the trim's NodeState attribute. However, this time, set the NodeState attribute to Has No Effect, which basically turns off its evaluation, thus speeding up the interactive performance of your animation controls.
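The same switching can be scripted. The following MEL sketch is only an illustration: armControl and ffBlendSrf1 are placeholder names for one of your animation controls and one of your blend-surface nodes, so substitute whatever your scene actually contains.

// Add a custom on/off attribute to the animation control (names are placeholders).
addAttr -longName "showBlends" -attributeType long -minValue 0 -maxValue 1 -defaultValue 1 armControl;
setAttr -e -keyable true armControl.showBlends;
// Drive the blend's nodeState with Set Driven Key (nodeState: 0 = Normal, 1 = Has No Effect, 2 = Blocking).
setDrivenKeyframe -currentDriver armControl.showBlends -driverValue 1 -value 0 ffBlendSrf1.nodeState;
setDrivenKeyframe -currentDriver armControl.showBlends -driverValue 0 -value 2 ffBlendSrf1.nodeState;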
Polygons
Polygon models are the oldest form of 3D surface, and most renderers (Maya's included) tes-
sellate NURBS surfaces into polygons at render time. For a short time, polygon models fell
out of favor with character animators because of their flat, faceted nature, but with the
advent of SubDivision surfaces, polygonal modeling techniques are becoming more popular.
Polygons are 3-, 4-, or n-sided shapes described by vertex position data. Connected, they
form polygonal objects that can be shaded, textured, lit, and rendered. Polygonal modeling
paradigms most closely resemble conventional sculpture, with figures being roughed out in
low polygon mode and smoothed toward the completion of modeling. In Maya 3 and later,
the polygon manipulation tools and animation performance have improved dramatically.
Texture mapping polygon characters requires that you create UV maps and can rapidly
become a tedious chore, but with skill, texture maps can make an otherwise unacceptably
low polygon model into a visually interesting subject.
SubDivision Surfaces
These exciting polygonally derived surfaces are the newest type of modeling paradigm. They
offer the animation advantages of low polygon surfaces combined with the smoothness of
highly tessellated NURBS surfaces, thus combining the ease of use of polygons with the
organic quality of NURBS surfaces. They are expensive in terms of memory usage, but given the drop in the price of memory, this is no longer a problem. However, SubDivision surfaces are currently available only with Maya Unlimited (requiring a significantly greater expense than Maya Complete), and they are currently incompatible with Fur and Paint Effects.

Figure 1.5 shows a flour sack that was modeled using each of the modeling methods.

Figure 1.5: A flour sack modeled using each method. From left to right: SubDivision surfaces, NURBS patches, polygons, and NURBS trims and blends.
instant, controllable smoothing; and surface attachability and detachability. Using the file floursackdismembered.ma on the CD, let's begin.
One problem with patches is aligning the surfaces properly so that they join as
smoothly as possible. Stitching edges and global stitching help a lot, but that won't make a
perfectly aligned "silk purse" out of these completely incongruous "sows' ears" that we have
now. I use the following method:
1. Replace end caps
2. Fillet blend between patch isoparms
3. Delete history on fillets
4. Rebuild fillet geometry
5. Attach fillet geometry to appendage patches
6. Global stitch
In the steps in the following section, I start from the hotbox whenever I choose a menu
command. Also, because you'll want to see the wireframe while working, choose Shading → Shade Options → Wireframe on Shaded if it is not already visible.
The steps in the following sections assume some knowledge of NURBS modeling.
Although we will model the flour sack in this case, these techniques apply to cre-
ating any patch model.
I created floursackdismembered.ma by massaging
NURBS spheres into shape for the body, the "hands,"
and the "feet." Figure 1.7 shows the settings I use
when rebuilding after detaching surfaces. I detach and
rebuild immediately, because it is sometimes easy to
lose track when working with a patch model. All these
surfaces have been rebuilt/reparameterized and are
ready for action.
their end points and are ready to have the Boundary Surfaces tool applied to them to
create our patch.
3. Delete the triangular end patches, as shown in the center image in Figure 1.8.
4. Shift+select the curves in clockwise order, and choose Surfaces → Boundary. In the option box, go to the Edit menu, reset the settings, and click the Boundary button. Press
the 3 key to smooth the display of the object, and you should have what is depicted in
the image on the right in Figure 1.8. Deselect Surfaces in the Select by Object Type but-
ton bar, and marquee select to select the curves and delete them. Now replace the end
caps for the other side.
5. Save your work.
Although these surfaces are flat now, they will blend in with the rest of the flour sack nicely
when we stitch them later.
Figure 1.10: Editing CVs while the blends automatically remain tangent
There is an ugly bend in the blend resulting from misaligned CVs in the bottom patch. To fix
that, I grabbed the second to the last hull on the bottom hand patch as well as the last two
CVs on the same hull on the adjacent patches (this makes sure that we don't pull our hand
patches out of alignment) and pulled it out along the X and Y axes; the blend matched
tangency.
Global Stitch
Global stitching allows you to join patches together seamlessly over an entire character or
object. Follow these steps:
1. To see Global Stitch in action, it is best to deactivate Wireframe on Shaded. Choose Shading → Wireframe on Shaded.
2. Now, after you delete history on all the patches, select all the patches and choose Edit NURBS → Stitch → Global Stitch. This step is tricky, but well worth the time to get
right. Global stitching is cool. Not only does it automatically align all the patches, it is
also easy to adjust because it is a single node applied to all the patches.
Figure 1.13 shows the option window with the appropriate settings. By adjusting
Stitch Smoothness, Max Separation, and Modification Resistance, you can create a smooth-
surfaced sack. The big question is, do you select Normals or Tangents? Tangents will do
everything possible to try to smooth between patches, even if it misaligns isoparms and CVs.
If you can make Tangents work, select Tangents. Normals smooths the surface, but not as
well. So let's set Stitch Smoothness to Tangents and click the Global Stitch button.
3. Deselect all the patches. Ugh! It looks as if the top of the sack is curdled! We need to fix this.
4. Select a patch as shown in the image on the left
in Figure 1.14, and click globalStitch1 in the
INPUTS section in the Channel box. You can
adjust these settings to change the entire sack,
which is convenient.
5. Click the Modification Resistance attribute
name in the Channel box, and MM click, hold,
and drag back and forth in the modeling win-
dow to adjust the attribute's value. Hold down
the Ctrl key to increase the "fineness" of the
adjustment if necessary. You want this number to be as low as possible and still get a smooth stitch because this number controls the amount of resistance the CVs show to the global stitch.
Global Stitch is a powerful but stupid process. When Stitch Smoothness is set to Tangents, it moves CVs wherever it wants to set up its version of what a smooth stitch requires, even if the surface buckles unusably. After setting the Modification Resistance attribute to a value that gives you the smooth surface you want, you're done.

Figure 1.13: The proper Global Stitch settings
6. As a last step, select all the patches, and choose Edit → Delete by Type → History.
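If you prefer to adjust the stitch numerically, the same settings live on the globalStitch node itself. This is just a sketch: globalStitch1 is the default name Maya gives the first stitch node, and the values shown are starting points rather than magic numbers.

// Inspect and tweak the existing stitch node instead of redoing the stitch.
getAttr globalStitch1.stitchSmoothness;              // how smoothness was set (Off/Tangents/Normals)
setAttr globalStitch1.maxSeparation 0.1;             // how far apart edges may be and still be stitched
setAttr globalStitch1.modificationResistance 0.05;   // keep as low as possible while the surface stays smooth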
Patches are too far apart If the patches start too far apart, there can be major problems, as it appears that Maya will attempt to join not only the separate surfaces but also join and align CVs that are on the same patch if they are within the Max Separation distance of the stitch. It is best to start with patches that touch or are very close and that are already aligned. Accomplish this by edge stitching first and then deleting history.
Misparameterized surfaces Trying to join surfaces that don't have the same number of U spans
and/or V spans can be problematic. Make sure that all surface isoparms match both numerically
and visually.
Surface discontinuities The image in the upper left in the following graphic shows a surface discontinuity where five patches meet. Correcting this involves individually shift-marquee selecting the five corner and five pairs of tangency points where the patches meet (you want to make sure you get the multiple points, as shown in the image in the upper right). Shift+select the five points in between, as shown in the image in the lower left. Now, run Elie Jamaa's fine Planarize plug-in
(available on the accompanying
CD-ROM). That should fix the prob-
lem, and the image will appear like
that in the lower right of the follow-
ing graphic.
For those pesky, hard-to-adjust
discontinuities that resist automated
fixes, double-click the
Translate/Move tool and select Nor-
mal mode. In this mode, you can
push and pull CVs in the U and V
directions. You can also drag them
outward along the surface normal.
Using Normal mode is the best way
to align CVs manually. Pull the CVs
perpendicular to the seam to align
the patches. Work with CVs and
Hull visible for best results.
Once you correct any problems and have a smooth sack, you are ready for the next step,
texturing.
Figure 1.16: Left image: A NURBS flour sack textured by using built-in UVs. Right image: A medium resolution polygon flour sack textured with burlap applied with custom UV maps.
flour sack textured with the same burlap texture mapped with a custom UV map (on the
right).
Speeding the texture-mapping step is one of the biggest benefits of proper planning
prior to modeling. When you predefine the overall look of the animation, texture mapping
becomes easier. Achieving good textures on your subject is difficult enough without trying to
define them while you work on the technical part of mapping. Every aspect of Maya gives
you an enormous number of choices; it is extremely easy to get lost in the many ways of cre-
ating textures. If for no other reason than this, starting this process with a clear idea of what
you want and learning how to obtain it is crucial.
It is also important to keep a consistent textural style throughout an animation. If a
character is textured naturalistically, it becomes the standard by which the rest of the textures
in the piece are judged. Much like a drawing in which one part is worked to a high finish, the
polished main character will look incongruous with the backgrounds if the rest of the textures
are haphazardly thrown into the piece. Although it is entirely possible to play complex tex-
tures off simple textures to great effect, such contrasts do not often happen by accident. A
consistent level of thought makes such composition devices work in an animation.
NURBS surfaces and polygons require different methods of texture mapping. Each
NURBS surface, whether a separate shape or a patch, has an implicit U and V direction and
is square in shape. All textures applied to NURBS surfaces stretch to fit that shape. Although
this feature makes NURBS shapes easy to map, it makes multiple patches difficult to map
consistently over the entire surface of your model. For all practical purposes, mapping multi-
patch models with images fitting across their surfaces requires a 3D paint program specifi-
cally designed for that purpose. As such, texturing such surfaces is beyond the purview of
this chapter. Polygons provide a much better example that can be textured from within
Maya (and Photoshop, of course).
Now let's convert the NURBS surfaces to a polygonal mesh. Follow these steps:
1. Converting the NURBS sack to polygons is simply a matter of using Modify → Convert → NURBS to Polygons with Quads, the General Tessellation Method, and a setting of 1 for the U and V direction under Per Span # of Iso Params.
2. Click Tessellate to create a polygon flour sack.
3. Choose Edit → Delete by Type → History.
4. In the UV Texture Editor window, delete the UVs that were brought over from the
NURBS to polygon conversion and remap with your own map.
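For reference, steps 1 through 3 can also be run from the command line with the nurbsToPoly command. The flag values below are my reading of the options named in step 1 (quads, General tessellation, one isoparm per span in U and V); check the enum values against your version's command reference, and substitute your own surface name for sackSurface.

// Tessellate the NURBS sack into quads, one polygon per span in U and V, then delete history.
string $poly[] = `nurbsToPoly -polygonType 1 -format 2 -uType 3 -uNumber 1 -vType 3 -vNumber 1 sackSurface`;
delete -constructionHistory $poly[0];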
I prepared two files that
contain UV maps for this polygo-
nal flour sack: sackUVone.ma and sackUVtwo.ma (available on the
CD) present two different methods
for arranging the UVs for this
sack. I created them by planar pro-
jecting the front and back polygons
from the front camera view. I
flipped the back UVs, moved them
beside the front UVs, and moved
them both out of the way in prepa-
ration for the next group of poly-
gons to be mapped. I projected the
top and side polygons in the same
way. I created the UV map for the
hands and feet polygons using
automatic mapping, and I sewed
them together to create the maps
found in the files. These two UV
maps differ only in the way the
polygons are sewn together to cre-
ate the fabric of the sack.
When creating UV maps, be
sure that adjoining edges are the
same size. They are the same edge,
but they exist at two different
coordinates in the flat UV map. If
one edge is longer than the other,
there will be an irreconcilable seam
where the two disparate edges
meet in the model, as shown in Fig-
ure 1.17.
Think about where these seams should line up. This flour sack will be made of some
kind of cloth or paper (in this case, burlap), so it will be tremendously advantageous to think
of these patches as patches of cloth that will be sewn together and then filled with flour. Let-
ting that thought guide me, I created sackUVtwo.ma and used it to create the textures. I select the flour sack, open the UV Texture Editor window, and choose Polygons → UV Snapshot.
Now I can export the image in the texture window as an image to import into Photoshop. I
recommend a size of 1024 on U and V. Turn off anti-aliasing, and make the line color white.
I recommend Targa or Maya IFF for the export; Targa
works in Photoshop with no extra plug-ins.
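The snapshot can also be written from MEL with the uvSnapshot command. This is a sketch only: flourSackPoly and the output path are placeholders, and the flags simply reproduce the 1024-pixel, white, non-antialiased lines recommended above.

// Write a 1024 x 1024 UV snapshot of the selected mesh as a Targa file.
select -r flourSackPoly;
uvSnapshot -name "sourceimages/sackUVs.tga" -xResolution 1024 -yResolution 1024
           -redColor 255 -greenColor 255 -blueColor 255 -antiAliased 0 -fileFormat "tga";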
Figure 1.18 shows my Photoshop environment. I
floated the lines from the UV snapshot by painting the
alpha channel in white onto a transparent layer. I then
lassoed a section of burlap and dragged it into a layer
sandwiched between the white line layer and the back-
ground. As I complete each piece of burlap, I merge it
down onto the background. I use Photoshop's
Rubberstamp tool to remove any seams where I have
pasted two pieces of burlap that don't quite match
naturally. I allow the texture to overlap the edge of the
UV patch because I will almost surely have to tweak
the poly UVs after I apply the texture and smooth the
polygons.

Figure 1.18: The Photoshop environment
I have included a flat burlap square seamless texture on the CD (burlapsq.tga). I created it by scanning a piece of burlap and making it seamless using Photoshop's Clone tool. I have also included the burlapcUV.tga file that is the finished UV mapped color texture created earlier in this chapter. It applies seamlessly to the sack in sackUVtwo.ma. From this file, you can derive a bump map by either connecting it to the bump channel in the Hypershade window or editing it in Photoshop to create a more custom look. I also included a file called ColorMap.tga, which corresponds to the map in sackUVone.ma. Use it as a guide to mapping that file.
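The bump connection mentioned above can be made by dragging connections in the Hypershade, or with a few lines of MEL like the sketch below; sackMaterial is a placeholder for whatever shader is assigned to the sack.

// Build a file texture for burlapcUV.tga and feed its alpha into a bump2d node.
string $file = `shadingNode -asTexture file`;
setAttr -type "string" ($file + ".fileTextureName") "sourceimages/burlapcUV.tga";
string $bump = `shadingNode -asUtility bump2d`;
connectAttr -force ($file + ".outAlpha") ($bump + ".bumpValue");
connectAttr -force ($bump + ".outNormal") "sackMaterial.normalCamera";
setAttr ($bump + ".bumpDepth") 0.2;   // bump depth is a matter of taste; start small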
With texturing done, it is time to move to setup. However, it isn't necessary to wait
until texturing is complete to move on. If you want to, you can set up the original and create
a UV map of a duplicate copy. Then, when finished, you can choose Polygon Transfer to
copy the UV map to the setup original.
What's that you say? You are an animator! That may be true, but technical directors
(TDs) usually handle character setup. If you are doing all the work for your produc-
tion, congratulations! You are now a TD!
Figure 1.20: These drawings also show extreme poses, but with the added dimension of form. I
scanned some ballpoint pen gestural thumbnail sketches into the computer at a high resolution,
resized them, and printed them lightly on toned paper. I then reworked them with white and black
colored pencil and outlined them in black ink. The idea behind this method is to add the illusion of
form and volume without losing the gestural quality of the sketch.
character animation, I want to stay focused on simpler control, which brings us to our next
principle.
Creating an Arm
Now let's put some of these principles in action using a well-muscled human arm. We'll
work first on the wrist. I use a system of NURBS boxes (created with ControlBox.mel on the
CD and shaped by translating CVs) as controllers for the body bones (using FK), arms and
legs (using IK), facial expressions (using SDK—set driven key—blendshapes), and fingers
(using SDK). I find that using point constraints for the arm IK combined with an orient con-
straint for the wrist bone allows the hands to stay locked in place when the body moves. The
problem is that connecting the elbow to the wrist directly causes the mesh around the wrist
to collapse when the wrist rotates around the X axis too far. Correcting this deformation is
difficult; weight painting can only take you so far. An elbow, ulna, and wrist arrangement
provides a way to transfer the x-rotations of the wrist to the short ulna providing a way for a
lattice to correct the collapsing forearm while allowing the wrist to rotate freely in the Y and
Z axes. What we'll do is bind the lattice to the elbow and ulna while constraining the rota-
tion of the wrist to the Wrist control box.
The steps in the following section assume that you know how to use Smooth
Bind and paint weights to create relatively smooth joint deformations.
2. Select the root joint, in this case the rib cage bone, and type select -hi at the command
line. (You will want to make a shelf button out of this command by drag selecting it
and MM dragging it to the shelf.) This selects the entire hierarchy of joints.
3. Choose Skeleton → Set Preferred Angle to set the bones in their proper place; this will
be the angle that Maya's IK solver starts to solve from. More important, it sets the
plane described by the placement of the shoulder, elbow, and ulna bones in place, pro-
viding a direction for the Pole Vector to point.
4. Select the Shoulder control box and the CSpine joint (in that order), and go to Compo-
nent Selection mode.
5. RM the Question Mark button, and display the local rotation axes. Select the axis for the
CSpine joint and rotate it to match the Shoulder control box, as shown in Figure 1.23.
To be as accurate as possible, work from the front view and increase the size of the
manipulator by pressing the plus (+) key. (Pressing the minus (-) key decreases it.)
6. Press F8 to switch into Select by Object Type mode, and then choose Constrain → Orient to constrain the CSpine joint to the control box. Now it will rotate with the Shoul-
der box. We can do this immediately because we originally selected the box and then
selected the joint to be constrained.
7. Snap the HandControl box to the wrist joint by selecting the HandControl box, hold-
ing down the v key, and moving the box to the wrist joint. It should snap to the wrist.
8. Freeze transformations on the box, and adjust the local rotation axis of the wrist to
match the box. We freeze the transformations on the control boxes so that we can zero
out any transformations and return the skeleton to the bind pose.
9. Click the HandControl box. Shift-select the wrist joint, and orient constrain the wrist
to the HandControl box.
10. With only the wrist joint selected, open the Attribute Editor and clear the X axis check
box under Joint: Degrees of Freedom to lock it in place.
11. With the Attribute Editor still open, select the ulna and clear the Y- and Z-axis check boxes under Joint: Degrees of Freedom to lock them into place.
12. In the Channel box, right-click the Rotate X attribute to open the Expression Editor.
We could use the Connection Editor, but because we would have to use an expression
for the right hand if we were setting it up anyway, doing it for this side gives us symme-
try, which is clearer.
13. In the Expression Editor, type Ulna.rotateX = HandControl.rotateX * 1. For the right side, multiply HandControl.rotateX by -1 to make the hand rotate correctly.
The HandControl now rotates the wrist joint in Y and Z while the Ulna rotates in X.
Because we used an orient constraint to control the wrist, it maintains its orientation no mat-
ter how the upper arm moves.
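For reference, here is roughly what steps 6 through 13 amount to in MEL, using placeholder node names based on the chapter's controls (ShoulderControl, CSpine, HandControl, Wrist, Ulna). Treat it as a sketch rather than a drop-in script; your joint and control names will likely differ.

// Step 6: orient constrain the CSpine joint to the Shoulder control box.
orientConstraint ShoulderControl CSpine;
// Step 9: orient constrain the wrist to the HandControl box.
orientConstraint HandControl Wrist;
// Steps 10-11: lock degrees of freedom (X on the wrist; Y and Z on the ulna).
setAttr Wrist.jointTypeX 0;
setAttr Ulna.jointTypeY 0;
setAttr Ulna.jointTypeZ 0;
// Steps 12-13: pass the control's X rotation to the ulna through an expression.
expression -name "ulnaTwist" -string "Ulna.rotateX = HandControl.rotateX * 1;";
// For the right arm, multiply by -1 instead so the twist mirrors correctly.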
Ninety percent of the time, I bind only selected joints to a particular mesh. For
segmented characters (which are how most characters are modeled), it is impor-
tant that those mesh objects be bound only to the necessary joints. This is faster
and easier to weight than binding these meshes to the entire skeleton.
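In MEL, binding to only the joints you want is a one-liner with Smooth Bind's to-selected-bones option; the joint and mesh names below are placeholders for your own.

// Smooth bind the forearm mesh to just the joints that should deform it.
select -r Elbow Ulna Wrist forearmMesh;
skinCluster -toSelectedBones -maximumInfluences 3;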
3. If you move the HandControl box, you will see that the mesh doesn't look too bad! We
will need to fine-tune the shoulder and the elbow joint, but that is relatively easy to do.
I like to set a key at an extreme pose for the wrist so that I can scrub through the time-
line and see how the mesh deforms in motion. Another great timesaver is to use the
Attribute Collection script (available on Alias|Wavefront's website—www.aliaswave-
front.com—or on www.highend3d.com) to create sliders to move the HandControl box
while you are weighting.
4. Let's set a key at frame 1 and move the HandControl box to an arm bent position at
frame 10 as shown in Figure 1.25. Copy the key from frame 1 and paste it at frame 20
to take the arm back to its bind pose. You can now paint weights on the shoulder and
elbow joints. The elbow joint is fairly easy to tweak, but we are going to concentrate
on the wrist. If you rotate the wrist in X, you can see that rotating the HandControl
too far in X distorts the mesh badly. If you know absolutely that you will not need to
twist the hand so far that the mesh collapses, you can skip the next section and move
on to the biceps. Otherwise, let's correct that weight.
by connecting the X, Y, and Z positions of the lattice points to the positive and negative x-
rotational values of the HandControl box.
To connect the lattice points to the rotations of the HandControl box, follow these
steps:
1. Reset the x-rotation on the HandControl box to 0.
2. Click the Rotate X attribute of the HandControl, and RM it to open the Set Driven
Key window. Click Load Driver.
3. Right-click the lattice, select Lattice Points, and marquee select all the lattice points.
Click Load Driven.
4. Drag select xValue, yValue, and zValue on the right side of the Driven section. Click rotateX on the right side of the Driver section, and then click Key.
5. Rotate the HandControl box 100 degrees in X. We want to go past the highest x-rota-
tional values we will use (usually 90 and -90). If the deformation looks good at that
point, it should look good at all values leading up to it.
6. Scale, rotate, and translate the points of the lattice so that the forearm shape is pleas-
ing. Key it in the SDK window, and rotate it back 100 degrees in the other direction.
Adjust the points, key it, and you're done! You now have a forearm that allows for iso-
lation on the hand and x-rotation without wrist collapse, as shown in Figure 1.28.
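The Set Driven Key clicks in steps 1 through 6 can also be scripted when there are many lattice points to key. This is a sketch only: forearmLatticeShape is a placeholder for your lattice shape node, and the loop keys each point's xValue, yValue, and zValue against HandControl.rotateX exactly as the steps describe.

// Key the rest pose at rotateX = 0.
setAttr HandControl.rotateX 0;
string $pts[] = `ls -flatten "forearmLatticeShape.pt[*][*][*]"`;
for ($p in $pts) {
    setDrivenKeyframe -currentDriver HandControl.rotateX ($p + ".xValue") ($p + ".yValue") ($p + ".zValue");
}
// Rotate to the extreme, reshape the lattice by hand, then key the corrected pose.
setAttr HandControl.rotateX 100;
// ...adjust the lattice points in the viewport here...
for ($p in $pts) {
    setDrivenKeyframe -currentDriver HandControl.rotateX ($p + ".xValue") ($p + ".yValue") ($p + ".zValue");
}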
Control Placement
Although efficiency with the number of controls you create is a virtue, placing them for quick manipulation is essential. Minimize the extraneous clicking of the mouse or movement of sliders, which slows the animation process. A well-thought-out set of character controls will maximize the number of vital movements that can be manipulated on each control. Open Blobbyman.ma from the CD to see an example of what I mean. Open the Outliner and click the Translate node. Expand the Translate node completely by Shift+clicking the plus sign next to the name to see the entire control structure laid out for ease of use. (See Figure 1.30.)

Figure 1.29: Top image: Translate Cluster along Z. Middle image: flood/replace Cluster weight with influence of 0. Bottom image: paint Cluster weight back until the biceps deformation looks correct.
Summary
Animation does not happen by accident; it takes a huge amount of work, but the job can be
made easier with proper planning. Unfortunately, being able to plan effectively requires some
knowledge of what Maya can do, and in this chapter I have given you some information
about Maya's capabilities along with some ideas of when and how to use them. Having a
clear idea of what you want to say with the animation of your character is perhaps the single
most important tool in your animation toolbox.
By always keeping in mind what you need your setup to do when you get to the char-
acter animation stage, you will not only save time when creating and setting up your charac-
ter, you will animate more efficiently when the time comes, giving you more time to actually
animate, rather than fight technical hurdles.
Modeling a SubDivision
Surface Horse
Peter Lee
Figure 2.1: The cylinder is placed with the horse reference picture.
Figure 2.2: Extra isoparms are inserted for the horse's body and head areas.
the shaping process becomes more detailed, insert more isoparms in the cylinder by pickmask-
ing isoparm, dragging the mouse to the place where you want to place the new isoparm, and
then choosing Edit NURBS → Insert Isoparms. Figure 2.2 shows the intermediate shapes.
Create two more cylinders, set the attributes to 8 sections and 5 spans, and shape the
legs in the same way that you created the body. Again, don't worry about how the image
looks from any view other than the side view. The legs need only to roughly conform to the
horse reference picture for now, as shown in Figure 2.3.
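In MEL the starting leg cylinders look something like the sketch below; the names, axis, and height ratio are placeholders, and only the 8 sections and 5 spans matter for following along.

// NURBS cylinders for the legs: 8 sections around, 5 spans along the length.
cylinder -name "frontLegSrf" -axis 0 1 0 -sections 8 -spans 5 -heightRatio 4;
cylinder -name "backLegSrf" -axis 0 1 0 -sections 8 -spans 5 -heightRatio 4;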
Once the horse shape roughly conforms to the reference picture, shape the horse in
other views as well. Move the legs to the left side of the body. Scale the hulls of the cylinder
body to make the horse's head and neck areas generally circular but vertically elongated and
to make the horse's body area more circular, as shown in Figure 2.4. When the rough shap-
ing is done, choose Edit NURBS → Detach Surfaces to cut the body in half as shown in the
image on the right in Figure 2.4. This will reduce the task of modeling the details to only one
side of the horse.
After we convert the NURBS to poly mesh, we'll join the legs to the body. To properly
merge the legs to the body, the eight sections of the leg need to line up with the appropriate
sections of the body. Place additional isoparms in the body, as well as in the legs, and shape
them to more closely follow the reference picture, as shown Figure 2.5. (Keep in mind that
the surfaces should be a bit larger than the reference picture; they will shrink in size when
they are converted to SubDivision surfaces.) The selected surface patches shown on the right
in Figure 2.5 will become a hole after the poly conversion takes place, and the eight sections
of the leg will merge with the two sections of each of the four sides. Notice that at this point,
the half body in the picture has UV spans of 16 and 6, the front leg has UV spans of 11 and
8, and the back leg has UV spans of 10 and 8.
Building the hooves is simple. Create a cylinder with two spans, and shape it like the
image on the left in Figure 2.6. Select the bottom edge isoparm, and duplicate a curve from
it. Apply Modify → Center Pivot to the curve and scale it to zero. (The curve shown on the
right in Figure 2.6 has been scaled to 0.2 for visual reference only.) Loft from the edge
isoparm to the curve to create the bottom part of the hoof. (You can select isoparms using pickmask while you're in object mode.)
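Scripted, the hoof-bottom construction might look like the following sketch. hoofSrf is a placeholder for the hoof cylinder, and v[0] assumes its bottom edge isoparm sits at parameter 0; adjust both to your scene.

// Duplicate the bottom isoparm as a curve, collapse it to a point, and loft to it.
string $crv[] = `duplicateCurve -constructionHistory 0 hoofSrf.v[0]`;
xform -centerPivots $crv[0];                 // Modify > Center Pivot
scale 0 0 0 $crv[0];                         // scale the curve to zero at its pivot
loft -constructionHistory 1 -degree 3 hoofSrf.v[0] $crv[0];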
Figure 2.3: The horse's legs are roughly shaped.
Figure 2.4: The horse is shaped from front and top views, and then cut into half.
Figure 2.5: More isoparms are added to the horse.
Figure 2.6: Model the hoof using a simple cylinder, and use loft to create the bottom part of the hoof.
Conversion to Polygons
To convert the NURBS body to poly mesh, follow these steps:
1. Select the NURBS body.
2. Choose Modify → Convert → NURBS to Polygons.
3. Set the Type to Quads, and set Tessellation Method to General. Set U Type to Per Span
# of Iso Params, set Number U to 1, set the V Type to Per Span # of Iso Params, and set
Number V to 1.
4. Click Tessellate to convert the NURBS body to poly mesh, as shown in Figure 2.7.
5. Now repeat steps 1 through 4 for the legs. The hooves can be left as NURBS.
To join the different parts of the horse that have been converted to polygons, follow
these steps:
Modeling Details
Now that we have a rough poly mesh from which to start our detailed modeling, the saying
that the devil's in the details applies. On the one hand, the amount of work you put into the
details will determine the quality of the resulting horse. You can cheat using bump and/or dis-
placement maps, but that will get you only so far. Most of the horse's details still have to come
from its actual surface. On the other hand, the more details you put into the horse, the heavier
it becomes, and heavy models are more difficult to set up and animate, and they take longer to
render. It's important, therefore, to maintain a fine sense of balance as you work—add details
to the model, but only as many as necessary for the job for which you are building the model.
Figure 2.10: The border edges along the mouth area are extruded and scaled in.
Figure 2.12: More edge lines are added around the mouth area to further refine it, and lines are also added to create a nostril area.
Figure 2.13: When the faces are kept triangular, the resulting SubDivision surface has extraordinary points, but when the faces are made into quadrangular faces, they convert to clean SubDivision surfaces.
9. Draw more edges to create an inner circular shape, and draw more edges around the
mouth area to make the mouth edges protrude, as shown in the image on the right in
Figure 2.12.
If you convert the horse at this point, the triangular faces of the nostril area turn into a
SubDivision surface with extraordinary points as shown in Figure 2.13, going from the
image on the top left to the image on the top right. But when you select every other edge of
the triangular faces and delete them as shown in the image on the lower left in Figure 2.13,
thus turning six triangular faces into three quadrangular faces, the conversion to a SubDivi-
sion surface becomes cleaner, without any extraordinary points being created, as shown in
the image on the lower right in Figure 2.13.
10. Select the center vertex of the nostril area and push it up and in.
We can come back to the mouth area later for finer sculpting, especially if you want to
build teeth into the mouth, but the work of adding edges is basically done.
Figure 2.14: More edge lines are drawn to create the eye area.
Figure 2.15: Edge lines are created inside the quadrangular face and the innermost face deleted to shape and refine the eye area.
One technique you can use to keep the patches quadrangular as you refine the horse is
to push out a vertex point from the edge line it's part of, as shown inside the circle in the
image on the lower left in Figure 2.14. Once the vertex juts out from the line, it is easy to see
that you can draw another line to split the large seven-sided face into two quadrangular
faces, as shown in the image on the lower right in Figure 2.14.
You can see another technique being used to keep the faces quadrangular inside the
circle in the image on the lower right of Figure 2.14. What was a single edge in the image on
the lower left has been replaced by two edges, thus creating an extra face. One of the quad-
rangular faces changes into a five-sided face as a result, but we'll correct this as we model on.
The necessity for creating the extra quadrangular face becomes clear when you see that this is where the hole for the eye will be created.
2. Zoom into the quadrangular face where the eye will be.
3. From the area shown in the upper left image in Figure 2.15, select the face covering the
eye area. You can either delete the face, select the resulting boundary edges, and choose
Edit Polygons → Extrude Edge twice, or you can use the Split Polygon tool to draw the
additional edges and delete the face at the center, to create the surface shown in the
upper right image in Figure 2.15.
4. The four sides of the eye area are too few to sculpt the eye properly, so draw an edge
line starting from the upper middle of the eye hole, and draw another edge line starting
from the lower middle. Don't worry too much about where these lines end up for now.
Your only concern at this point is that the four sides have become six sides, as shown in
the lower left image in Figure 2.15.
5. Draw yet another edge line going out to the side of the eye, and apply another Extrude
Edge to the boundary edges of the eye so that you can create thickness for the bound-
ary area, as shown in the lower right image in Figure 2.15.
Now, let's turn to the way the edges coming out of the eye area connect to the
rest of the head, and clean up the topology. The area stretching around the eye has some trian-
gular faces, shown in the upper left image in Figure 2.16, which create extraordinary points
when converted to a SubDivision surface, as shown in the upper right image in Figure 2.16.
When the topology of the area is changed to quadrangular faces as shown in the lower left
Figure 2.16: Refining the faces to quadrangular faces will create a much cleaner SubDivision surface.
Figure 2.17: More details are added to the sides of the eye area to fine-tune the eye shape.
image in Figure 2.16, however, the resulting SubDivision surface is much cleaner, as shown in
the lower right image in Figure 2.16. The subtle changes in the lines make a lot of difference
in the SubDivision surface conversion.
As you sculpt the eye area more, you'll soon find it necessary to further divide the side
patches of the eye into smaller faces so that you can make the sides of the eye fold tightly.
Notice how the smaller divisions go from the upper left image to the upper right image in
Figure 2.17. A careful examination will show that all the new edges form quadrangular
patches. This area, therefore, will convert to a SubDivision surface without any extraordi-
nary points. The eye area that you've seen so far is actually a flattened-out version of the
model. I did this so that you could more clearly see the topology of the patches. In the lower
left image in Figure 2.17, I sculpted the same surface with the same topology more realisti-
cally. Although the points have been moved around, making some parts of the area difficult
to see, the topology of the images in the lower left and the lower right are the same. The
image on the lower right shows the final resulting SubDivision surface area of the eye.
Figure 2.18: Edge lines are drawn to create a five-sided face, which is then extruded to create the ear.
Figure 2.19: The back of the ear area is refined.
3. Divide the faces at the back of the ear into triangular faces as shown in the lower left
image in Figure 2.19. Notice that a diamond-shaped face has also been extruded out,
which will become the inside of the ear.
4. Delete the three existing edges that shape the triangular faces at the back of the ear
area to create new four-sided faces as shown in the lower right image in Figure 2.19.
Notice that the original triangular face by the back of the ear is now also quadrangular.
As for the front part of the ear, start with a flat five-sided face as shown in image A in
Figure 2.20, but extrude a diamond-shaped quadrangular face as shown in image B. Push
back the fifth edge to the back of the ear to create an arch for the inner backside of the ear as
shown in image C.
Detailing the inner parts of the ear is straightforward. Follow these steps:
1. Select the inner diamond-shaped face, and extrude it inward twice as shown in image
D in Figure 2.20.
2. With the topology created, push the inner parts in and down, as well as the bottom
points of the ear, as shown in image E in Figure 2.20.
3. Convert the poly surface to a clean SubDivision surface as shown in image F, and
tweak the points to refine the shape of the ear. Convert back to a poly surface for fur-
ther modeling.
Figure 2.20: The front part of the ear is refined with extrusion, and one of the vertices is pushed to
the back of the ear.
Figure 2.22: Any remaining triangular faces around the chest area are edited to become quadrangular faces, and conversion to a SubDivision surface shows if everything is indeed quadrangular around the chest area.
Figure 2.23: Extra edge lines are added around the back leg area, and that area is further refined.
Insert a line from the top of the body to the stomach area as shown in the lower left image
in Figure 2.23. Sculpt to more clearly define the back leg and the stomach, and add another
line from the top of the body all the way down the back leg as shown in the lower right
image in Figure 2.23.
From the back view of the horse as shown in image A in Figure 2.24, draw another line
on the back leg as shown in image B. Draw a line straight down the horse's behind until it
joins the first edge of the back leg as shown in image C. The lines going down to the back
center should also be horizontal. Delete the extra vertices at the center as shown in image D.
Draw two more horizontal lines as shown in image E.
Our last detailing task in terms of sculpting is to tighten the back leg, making it look
leaner and more muscular as shown in the bottom image in Figure 2.25. I moved and tight-
ened the edges in such a way that it may appear as though I added extra lines, but compare
the back and side views in the top and bottom images, and notice that no new lines have
been added.
You can now start on the final refinements of the patches. Figure 2.26 shows three
examples of the triangular or five-sided faces being redrawn into quadrangular faces. Once
you get used to "seeing" how the lines should be redrawn in any situation, splitting faces and
deleting edges to clean up is not a difficult task.
To confirm that you have turned the parts of the horse you are detailing into quadran-
gular faces, convert the horse into a SubDivision surface and see if you get extraordinary
points, which are easily seen by the extra edges that surround them. Compare the images in
Figure 2.27, and notice that extra edges surround the extraordinary points in the image on
the left.
When you are satisfied that the surface is clean, get into the front view window, snap all
the vertices along the middle edge to the Y axis using the Snap to Grid function, and apply
Polygons → Mirror Geometry to the horse, making sure that the Mirror Direction is set to -X
in the option box. Some of the open edges such as the eyes, the mouth, and the bottom parts of
Figure 2.24: More lines are drawn around the horse's behind, and that area is further refined.
Figure 2.25: The back leg is refined to make it look more muscular.
Figure 2.26: Triangular or five-sided faces are redrawn to become quadrangular ones.
the legs might merge as well when they shouldn't. Simply delete those unnecessary faces. All
the center edges of the horse should merge, but if some do not, you can merge those edges
manually using the Edit Polygons → Merge Edge Tool. On rare occasions when merging does
not work (usually because of opposite normal vectors), you may find it most efficient to
simply delete one of the two faces and create a new one using Polygons → Append to
Polygon Tool. Last, fine-tune the converted SubDivision surface horse in Standard Mode or
Polygon Mode to get the final shape you want. Figure 2.28 shows the final horse model as a
wireframe, and Figure 2.29 shows the final horse model as a rendered image. You can also
see an animation of the textured and rigged horse by opening the ch2horsewalk.mov file on
the CD-ROM.
Figure 2.27: The extraordinary points are shown in the image on the left.
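If you would rather script the mirror-and-weld pass, the sketch below does the same job by hand with a flipped duplicate. The name horseHalf is a placeholder for your half-horse mesh, and the menu items above remain the simpler route.
    string $copy[] = `duplicate -rr horseHalf`;        // horseHalf is a placeholder name
    scale -r -1 1 1 $copy[0];                          // flip the duplicate across X
    polyNormal -normalMode 0 -ch 1 $copy[0];           // negative scale reverses normals, so flip them back
    string $whole[] = `polyUnite -n horseFull horseHalf $copy[0]`;
    polyMergeVertex -d 0.001 horseFull;                // weld the coincident center-line vertices
    delete -ch horseFull;                              // discard construction history when you are done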
Figure 2.28: The wireframe of the final horse model is shown in shaded mode.
Summary
This chapter took you step by step through the process of creating a SubDivision surface
horse. Many modelers in the industry are using this process, in which the basic shapes are
built with NURBS patches, converted to polygon patches, merged, refined, and then con-
verted to a SubDivision surface at the very end. Using this process, we bypass the tricky
work of keeping tangency between NURBS patches to make them appear seamless. As you
refine the polygon model, always edit the edges in such a way as to create four-sided faces so
that there will be no extraordinary points when the model becomes a SubDivision surface.
As far as I know, there is no exact technique or method for refining or redefining the polygon
model's topology to create a clean final SubDivision surface. Nevertheless, experienced mod-
elers recognize a cleanly built SubDivision surface when they see one: it has no extraordinary
points, the edges form flowing lines that do not get cut off randomly, and those lines go
around the surface areas in such a way as to produce correct deformation when properly
weighted. But the best method for an aspiring modeler, in my opinion, is surely the one you
acquire as you yourself cut the edges.
Organix—Modeling by
Procedural Means
Mark Jennings Smith
devious in its simplicity, and in homage I call it systematic multireplication and include it in a
wider discipline I began to call organix.
Figure 3.1: Photos A, B, and C are various angles and magnifications of an animal bone. Photos D
and E are various magnifications of a tree seed. Photo F is a mature grass species.
The other thing you notice about natural objects is the duplication that takes place when
nature creates. Replicating the simpler form leads to the more complex and interesting shape.
The computer is a perfect workhorse for the geometric math involved in replicating forms.
Every time you create a bicycle tire and spokes or a ceiling fan, you borrow from nature.
Nature is not always exclusively organic. You can also find artistic inspiration in grav-
ity, thermodynamics, and surface tension. As a reminder, I keep a chunk of wax near the rest
of my computer adornments. I removed the wax from a dish that had been filled with coins.
In the center of the dish was a large wine-colored candle, which eventually overflowed, seeping
hot melted wax onto the coins. Retrieving my coins uncovered an amazing and
inspiring form, shown in Figure 3.3. Gravity, thermodynamics, and surface tension were
among the contributors to this bit of natural art.
Figure 3.4 shows an image modeled in Xfrog and rendered in Maya. It was the direct
result of finding the wax. This image is an example of how all experiences, great and seemingly
Figure 3.3: An image of coins imprinted in melted wax
Figure 3.4: The resulting image, derived from and inspired by the wax
insignificant, whether you are conscious of them or not, play a role in your creative strength.
For so many good reasons, go outside and smell a rose.
I thought I was a true geek the first time I sat in a chair and started to notice the
specular highlights of the chrome tubing, and the diffuse shadows the fabric played
over the wood. That was a long time ago, when it was all still so new. And now I
notice that a slight reflection, a glint off of an edge of glass, can instantly put me in
that mode where I think about sitting with sliders trying to recreate those visual
instances. (Mark Sylvester, Ambassador 3D)
Now let's duplicate the group a bit and change our single organix primitive into a more
complex shape.
4. Choose Edit → Duplicate (option box). Set Translate to 0, 0.3, 0.3; set Rotate to 10, 0, 10; set
Scale to 0.9, 0.9, 0.9; set Number of Copies to 39; and set Geometry Type to Instance,
as shown in Figure 3.6. Set Group Under to Parent. That's it. Click Apply and check
your result. You should have something that looks like Figure 3.7. Move your camera
around a bit and check out the shape.
5. After you check your result, render it out, and then save your scene.
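If you want to script step 4 instead of filling in the dialog box, the following MEL loop approximates it: it instances the selected group 39 times, offsetting each new instance relative to the previous one with the same translate, rotate, and scale values.
    string $sel[] = `ls -sl`;                 // assumes the organix group is selected first
    string $prev = $sel[0];
    for ($i = 0; $i < 39; $i++)
    {
        string $inst[] = `instance $prev`;    // instances share the original geometry
        move   -r 0    0.3 0.3 $inst[0];
        rotate -r 10   0   10  $inst[0];
        scale  -r 0.9  0.9 0.9 $inst[0];
        $prev = $inst[0];
    }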
Figure 3.8 shows several rendered angles of the new object, which is organic looking
indeed. The simple shader that was placed on our base primitive object takes on a whole new
texture life after it has been assigned individually on the newly replicated objects. The repeti-
tive nature of the texture is reminiscent of thousands of insects and reptiles. The cone point
takes on new significance as well because it appears to be part of a series of thorn, claw, or
spikelike barbs. Notice too that we used instances instead of copies, which saves rendering
time and memory. Using instances also gives us our backbone for entry-level animation, with
which we will deal later.
Figure 3.7: A hardware-rendered view of what your result should look like, with the Outliner settings
Figure 3.8: A collage of four separate angles of the new primitive showing its diversity and ability to
blend separate components into one
By simply altering the pivot point, a single sphere can produce wildly different duplication
results from the exact same parameters. The three sets of images, shown in Figures 3.9
through 3.17, show the sphere before and after duplication. The first image of
each series displays the highlighted relative position of the pivot point prior to duplication.
Figure 3.9: The first image
Figure 3.10: Wireframe version of the render of Figure 3.9, shown at render angle
Figure 3.11: Final rendered version of Figure 3.9
Figure 3.12: The second image
Figure 3.13: Wireframe version of the render of Figure 3.12, shown at render angle
Figure 3.14: Final rendered version of Figure 3.12
Figure 3.15: The third image
Figure 3.16: Wireframe version of the render of Figure 3.15, shown at render angle
Figure 3.17: Final rendered version of Figure 3.15
2. This object is set up for some interesting movement. Zoom your perspective view out
so that you can get a decent view of the entire organism.
3. Now, let's translate the original carapace node. Test it first by a single translation in one
axis. Notice how the entire shape squirms in accompaniment. After each single translation,
be sure to undo again by pressing z (Undo) to return to the original state. Try two and then
three translations before returning to the original state. Each translation is sequentially
As you recall, we instanced one original object to create our organism. Pros and cons
are associated with instancing, but instancing suits our purposes famously. Here are a few
rules to remember about geometry copies and instances (a short MEL demonstration follows the list):
• Copies are just that—identical copies of the geometry. Instances rely on the data that
composes the original geometry. An instance is a displayed representation of that origi-
nal geometry.
• Using instances takes much less memory, renders faster, and reduces the size of a scene file.
• You cannot alter instances directly. Any change in geometry placed on the original is
reflected immediately in the instances.
• Instances cannot be assigned alternate shaders. Changes to the shading network of the
original object are reflected in the instances as well.
• You can duplicate and instance lights, but instanced lights will have no effect in the
scene (so I'm not really sure why you would want to do that).
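The following throwaway MEL demonstrates those rules without touching your scene objects; it builds its own sphere, instances it, and then edits the original so you can watch the instance update.
    string $orig[] = `polySphere -n instDemoOrig -r 1`;   // original geometry
    string $inst[] = `instance $orig[0]`;                 // a display-only instance of it
    move -r 3 0 0 $inst[0];                               // instances keep their own transforms
    // Editing the original's geometry shows up in the instance immediately,
    // because the instance only displays the original's shape data.
    select -r ($orig[0] + ".vtx[0]");
    move -r 0 0.5 0;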
The animation Squirm320QTsoren3.mov on the CD-ROM expresses a great range of
organix movement. We derived something quite complex from simplicity. As you begin to
add other elements to the animation equation, you will certainly deal with more variety. Let's
take animation a few degrees further by adding some tricks. On paper it will not look like
much, but the results will show otherwise.
Figure 3.19: A cone with its pivot point offset from its original location within the cone; the pivot is
now markedly distant from its geometry.
4. Press the Insert key (the Home key on a Mac system) so that you can move the pivot
from its original position. Translate the pivot over eight units in the X direction. If
your cone moves, you didn't press Insert properly or at all.
5. After you establish the new pivot position for the cone, press Insert again to leave pivot-
editing mode and return to the normal transform controls for the group.
6. Now that the single_thorn_control pivot is offset, move the group until its pivot is at
0,0,0. Figure 3.19 shows the results.
By offsetting the pivot point of an object or group, the Duplicate tool becomes
much more interesting, especially for working with organix forms. This offset tech-
nique is a primary tool for achieving interesting and varied results.
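The pivot offset can also be set from MEL. The line below assumes the single_thorn_control group is still sitting at the origin, so placing its pivots at world-space (8, 0, 0) amounts to dragging the pivot eight units in X.
    // Set both the rotate and scale pivots eight units out along X.
    xform -ws -rotatePivot 8 0 0 -scalePivot 8 0 0 single_thorn_control;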
The Duplicate Options dialog box displays the values from its previous use. This fea-
ture can be great for helping you to remember your last set of parameters if you
want to use them again. It also comes in handy for slight variations of those original
parameters. If your new parameters are completely different, it's best to start from
scratch (with the default settings). In the Duplicate Options dialog box, choose Edit →
Reset Settings to reset the parameters to the default. I find that resetting also
gives you a better mental image of what you are doing and that you are less likely to
make input mistakes.
Now let's run this baby around the proverbial horn by duplicating it.
7. Change to the front view if you are not there already. Select single_thorn_control to
make it active, and then choose Edit → Duplicate to open the Duplicate Options dia-
log box.
8. Choose Edit → Reset Settings to flush out the old parameters.
9. We want to create 39 more instances of this group for a total of 40, so set Number of
Copies to 39, and select Instance as the Geometry Type. The only other parameter we
will change is to adjust the rotation of each instance by 9 degrees on the Z-axis.
Remember that because our parent group has an offset pivot point, the result will be
dramatically less crowded than our previous example.
10. Click Duplicate to see your result. Figure 3.20 shows what you should have, and the
inset shows the parameters used to achieve it.
We want to create a cluster for every planar level of CVs on the cone, using the method
just outlined for the point. Out of the bottom two levels we will create a single cluster and
call it thorn_base. Figure 3.23 shows the clusters and their naming conventions from the
point to the base.
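Scripted, the cluster setup looks something like the sketch below. The cone name and CV indices are illustrative only; substitute whichever ring of CVs each cluster should grab, and rename the handles to match the names used in this chapter.
    select -r nurbsCone1.cv[0:7][6];       // one planar ring of CVs near the point
    cluster -n thorn5;                      // Maya names the resulting handle thorn5Handle
    select -r nurbsCone1.cv[0:7][0:1];      // the bottom two rings together
    cluster -n thorn_base;                  // this handle drives the base of the thorn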
There is not a specific end goal here. What I'm presenting is more a theory-based
concept. These techniques are guidelines for creating the abstract, which can
serve a useful purpose for attaining a certain look or effect. Beyond the concepts,
there is no right or wrong here, and experimentation and imagination are the key
to creating something interesting. You'll never see a tutorial for kaleidoscope
operation stating "Shake your kaleidoscope this way to create this pattern."
18. Open the Outliner, and slide the seven clusters into view. We are going to create the
most basic cyclical animation possible (one that repeats exactly after a certain number
of frames).
19. We want to create a 90-frame animation, so set the Time Slider for 90 frames. Select
the cluster thorn5handle. Put the Time Slider first on frame 0, and create a key.
20. In the Channel box, choose Channels → Key All. Now slide your Time Slider to frame
89, and once again choose Channels → Key All in the Channel box. Slide down to
frame 30, and move cluster thorn5handle 0.4 units in the X direction and create a key.
21. At frame 60, move thorn5handle to -0.4 in X and choose Key All again. You have now
created a short cluster animation. Set the Time Slider to run from frame 1 to 89 (not 0
to 89) and play it back.
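Steps 19 through 21 can also be keyed from the Script Editor; this sketch uses the frame numbers and values from the text and the handle name used in this scene.
    playbackOptions -min 1 -max 89;                          // play 1-89 for a seamless loop
    setKeyframe -t 0  thorn5handle;                          // key all channels at frame 0
    setKeyframe -t 89 thorn5handle;                          // matching key at the end of the cycle
    setKeyframe -t 30 -at translateX -v  0.4 thorn5handle;   // swing out at frame 30
    setKeyframe -t 60 -at translateX -v -0.4 thorn5handle;   // and back through at frame 60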
This is not the most exciting animation in the world, but it will prove quite useful in
adding secondary animation. Remember that by adding this animation to the original cone,
all the cones will mimic this motion. You buy yourself a lot of syncopated razzle dazzle with
little effort.
22. Stop the animation, and open the Outliner. We now want to turn the visibility back on
for those 39 instances that we made disappear. Select the 39 numbered instances of
single_thorn_control##.
23. In the Channel box, find the Visibility parameter. Change Visibility to On or enter 1 in
the parameter box. All 39 cones should reappear, showing the full complement of 40
cones again.
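You can also flip the visibility of all the hidden instances in one go from MEL; the wildcard assumes their names still begin with single_thorn_control, as they do in this scene.
    string $thorns[] = `ls "single_thorn_control*"`;
    for ($node in $thorns)
        setAttr ($node + ".visibility") 1;     // 1 = visible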
Now if you scrub through your 90 frames, you will see that all the cones share the clus-
ter's animation. Make changes to the original node, and run the animation again. You have
now added another level of complexity to your organism. Additionally, just as with our first
organix primitive, each instanced_thorn_control## child of the single_thorn_control##
parents is tweakable. Since each one is an offset of the next, they will control the whole
organism slightly differently. Close this scene and open cones3c.mb.
This scene is identical to the previous scene except that I spent a little more time on the
animation of the cone. I actually took advantage of all the clusters that we had previously made.
24. Set the Time Slider to 1000 frames, and then play the animation. You can see the
resulting animation created by using the extra clusters. Again, as we have done previ-
ously, make the rest of the organism visible.
Without adjusting nodes you can see that the animation already has a cool impact on
the form. Once you begin tweaking the nodes, the results will be more prominent. Included
on the CD is a 1000-frame animation that incorporates the scaling, rotation, and transla-
tion of instanced nodes. It also incorporates the secondary cluster animation and some cam-
era and light movement, which, as you can see, adds yet another level of complexity for
experimentation.
prevalent attitude of operating system snobbery gets us nowhere as artists. Even if you make a
heavy investment of both time and money in a certain software package, don't let that blind you
to a nifty toolset if it is within your reach. Later in this chapter, I'll describe some software I use
in concert with Maya.
When Paint Effects was introduced into Maya, it was an astounding advancement and
was used for everything from flowing grass to perspiration. The sheer volume of animatable
parameters was enough to boggle the mind. I soon got bored painting trees and decided to
see how it could be used otherwise. While testing the limitations of Paint Effects, I was dis-
heartened to find that you couldn't paint polygons. I model in other programs for various
reasons, and that makes importing NURBS into Maya a problem.
My trouble arose when I tried to paint hair on a polyface model. But it didn't take me
long to figure out that I could parent a Maya NURBS proxy to my object, fashion it similarly
in form, make it paintable, and then turn it invisible. This became the cornerstone of cool
things to come. The following example shows a model of my sister-in-law's face. The poly-
gon model was fitted with a "NURBS skull" proxy for the purpose of receiving Paint Effects
strokes. As in real life, I made sure that the hair covers her unsightly scars (uh, I mean poly-
gon edges) (see Figure 3.24).
It is easy enough to introduce a NURBS sphere into the scene that would serve as Jen-
nifer's replacement skull. The skull does not need to actually look like a skull, but it is impor-
tant to make sure that the paintable NURBS surface will indeed fit within the edges of your
Figure 3.24: The imported 3D model of my sister-in-law Jennifer poses a problem for Paint Effects hair
attachment. Maya does not yet allow for strokes to be attached to polygons in the normal fashion.
polygons. In this example, hair, which protrudes from the scalp anyway, will suffer little
from this cheat. Hair can hide scars, tattoos, hickeys, and, in this case, an actual cranium!
Over the years, I've found it best to create an obnoxious color for my proxy, because this
will make it stand out amidst the hair, pointing out heavy clumps of hair while immediately
identifying bald spots.
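Giving the proxies that loud color is a three-line job in MEL; the shader name here is made up, and the proxy surfaces are assumed to be selected when you run it.
    string $shader = `shadingNode -asShader lambert -n proxyLoudColor`;
    setAttr ($shader + ".color") -type "double3" 1 0 1;   // shocking magenta
    hyperShade -assign $shader;                           // assign it to the current selection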
A single Paint Effects stroke can cross multiple surfaces. This is a cool feature and is
open for serious experimentation. I noticed while painting different hairstyles on
various-shaped head models that a proxy skull wavered dramatically in shape from
character to character. This posed a challenge depending on the hairstyle you are
trying to achieve. I found it easier to place multiple paintable surfaces together as a
great foundation for laying hair. A stroke can be continued across several NURBS
surfaces, making it easier to judge how a Paint Effects curve will react. A single curve
can then be tweaked further by nudging whole NURBS surfaces around (usually
spheres) for the right look. Try it. It will give new meaning to CG hairstyling.
Let's try painting a few locks of hair on a proxy skull. Notice that in the process we are
painting across multiple surfaces. First, open the file scary_face.mb on the CD. In this file, I
added supplemental geometry to the side of the model's head. I was having difficulty clearing
the side of her cheekbones properly. By adding an elongated NURBS sphere (see Figure 3.25),
I could easily start from the top of her head and sweep around her cheek without making the
hair protrude oddly through her lovely high cheekbones.
When you are reasonably happy with your skull adjustments, it is time to test the head
for any needed adjustments. Select all the NURBS surfaces you added as your proxy, and then
choose Paint Effects → Make Paintable. Make any brush choice from your Visor and add a
few strokes. Remember that holding down the b key allows you to scale the size and flow of
your brush stroke. Whatever stroke you use, be it corn or red Mohawk, you'll see it follow
your proxy skull. Your image should look something like Figure 3.26.
Figure 3.25: A side wireframe view of two distorted NURBS spheres that will stand in for already removed polygon geometry. Because Paint Effects strokes will not adhere to polygons, I used NURBS replacements.
Figure 3.26: Two brightly colored NURBS spheres are made paintable for accepting Paint Effects brush strokes. Here a variety of vegetation crosses the skullcap and cascades down the side of the head. Notice that a brush stroke will continue across two separate paintable surfaces.
Figure 3.27: The top-down view of the head model with real Paint Effects hair brush strokes. The
NURBS proxies used to replace the skull are still in place but have been toggled invisible in the
Channel Editor.
Now make all your proxies active at the same time. In the Channel Editor, locate Visi-
bility and turn it off. Now all your proxy skulls should have disappeared. Try adding a few
more strokes to your "invisible" objects. The strokes will continue to cover the surface
regardless of the status of their visibility. As I mentioned, the obnoxious color shaders
applied to your proxies serve a purpose. Make your proxies visible again. You'll see how
much easier it is to paint hair when you know where you're painting. Try painting some
decent locks on the scary face model. See if you can complete the entire head, making it as
real as you can. It's actually quite amusing. Figures 3.27 and 3.28 show one possible out-
come to our model's hair replacement surgery.
How does this fit into organix? Well, the realization that I could paint across more than
one object and turn those object(s) invisible prompted me to consider further uses—or
abuses—of the technique. The ability of a single Paint Effects stroke to cross multiple sur-
faces made me curious. It added a whole new realm of possibility to my abstract beasties,
while creating a new way to toy with Paint Effects strokes.
Figure 3.28: The rendered result of a few well-placed Paint Effects strokes does a fairly decent job of
giving Jennifer a new cyber-style.
Figure 3.29: The Duplicate Options dialog box
Figure 3.30: The perspective window
Let's look at a few examples of duplicating Paint Effects brush strokes as geometry. Fol-
low these steps:
1. Load the Maya binary SingleStroke.mb from the CD, and then choose Window → Outliner
to open the Outliner.
2. Select strokeGold1 from the Outliner and rotate it 90 degrees on the X-axis. Now ren-
der it. The brush stroke is one of the default Paint Effects metal brushes in Maya.
3. Now let's duplicate the brush stroke a bit and change its form. Choose Edit → Duplicate
(option box). We will want to alter some parameters. Change Translate to 0, 0, 1.5. Change
Scale values to 1.0, 0.8, 0.7. Set the Number of Copies to 19. Geometry Type is
Instance. Set Group to Parent. That's it! Make sure strokeGold1 is still active. Click
Apply and check your result. Figure 3.29 shows the correct parameters for the Dupli-
cate Options dialog box.
4. Check your result in the perspective window, as shown in Figure 3.30, and then render it
out. Your results should look something like Figure 3.31, depending on your camera angle.
5. In the Outliner, select all strokeGold# brush strokes and group them together. Label this
new group StrokeGoldGroup1 and close it. The curve that you should not select is curveGold.
Rename that to StrokeGoldCurve1Control. Save your scene file. It is
StrokeGoldGroup1Control.mb on the CD.
Figure 3.31: The results of duplicating Paint Effects strokes
Figure 3.32: Setting the parameters for the duplication of StrokeGoldGroup1
Figure 3.33: The wireframe perspective view of the duplication result
chose one of the metal brushes. I painted a single brush stroke onto Proxy_Sphere,
which I had previously made paintable. The intention here was to multireplicate the
curve many times over and then make Proxy_Sphere invisible.
Here are some things to note:
• A Paint Effects brush stroke applies to a NURBS surface much like a curve on surface,
but it is not attached to the surface exclusively. In other words, the brush stroke can be
removed from the surface of the NURBS sphere as an independent node, yet retain its
original painted shape.
• Altering the shape, translation, or rotation of Proxy_Sphere will affect its assigned brush
strokes in kind, regardless of the proximity of each to the other. See Figure 3.37.
• Paint Effects brush strokes can translate, rotate, and scale independently of the object
on which they were painted. See Figure 3.38.
2. Load Chinese_Dragon_RevisitedB.mb from the CD.
3. In the Outliner, open the Tendrils group. Paint Effects brush stroke stroke1 is the sole
child of the Tendrils group. This is the stroke painted on Proxy_Sphere. Perform
actions on both stroke1 and Proxy_Sphere, and notice how they affect each other. The
earlier bulleted items may seem trivial, but, as you can see from this example, they are
at the core of organix animation using Paint Effects.
Figure 3.37: Various simultaneous actions on assigned brush strokes by altering the paintable object
Figure 3.38: Various actions of a Paint Effects brush stroke performed independently from the
NURBS object on which it was painted
Proxy_Sphere, and run through the Time Slider again. You'll see now that the curve-on-surface
animation was just an illusion.
6. Reload Chinese_Dragon_RevisitedA.mb. (Make sure you load Chinese_Dragon_RevisitedA.mb,
not Chinese_Dragon_RevisitedB.mb.) Open the Tendrils group, and you will see the rest of the
duplicated brush strokes. There are a total of 60 instances of stroke1. By selecting various
numbered brush strokes, you can easily see that all instances do not lie on the surface of
Proxy_Sphere. Performing any actions directly on any single brush stroke will not affect the
others as you might expect. Try it for yourself. If you lose the original configuration, simply
reload Chinese_Dragon_RevisitedA.mb.
Great! Now we can begin to see something more interesting by toying with
Proxy_Sphere.
7. Load Chinese_Dragon_RevisitedC.mb.
8. In the Outliner, select Proxy_Sphere and go to the camera1 perspective view if you are
not already there.
9. Press w on the keyboard to select Translate mode, and select Proxy_Sphere at its origin
handle. Moving the sphere around in 3D gives you a good idea why I called my scene
Chinese Dragon.
" The original brush stroke, while animatable, will not transfer its actions to its 59
instanced strokes as you might suspect. Scrolling through the Time Slider makes
it obvious that the instances are reacting to the animation on s t r o k e 1 . However,
I created this animation before creating the instances by animating the curve
created by the initial brush stroke. This curve has since been deleted, but would
be required to perform instanced stroke animation.
Figure 3.39: I modeled the "Pinos Mellaceonus" insect in Xfrog. Texturing and other models were
done in Maya.
to create both real and abstract models. The palette of tools is not overwhelming but allows
you to fashion some highly detailed models, be they imagined or real. A reasonably profi-
cient user can model anything from a pineapple to a whole banana tree (see Figure 3.40).
Earlier I mentioned that you can finely craft the intimate parts of a flower to exacting detail
using this program. Although you would be hard pressed to model a horse (impossible, I
think), anything natural that conforms to organix-type rules is fair game for Xfrog. Anything
from a fly's eye to rows of teeth, or a volvox to salmon roe, and any plant imaginable, is well
within the grasp of Xfrog. It also has some interesting animatable features that allow you to animate a
finely detailed tree or plant, from seed to maturity.
Greenworks has written a Maya plug-in that imports Xfrog-created models and animation
(.xfr files). As you go through the Greenworks tutorials on the CD, some texture problems
might arise. Xfrog uses PNG and the full range of the TIF image format. Maya once read PNG in
an earlier version, but this functionality was removed for some reason in later versions. Maya
has never correctly read the full flavor spectrum of the TIF file format. These are shortcomings
on Maya's part that I have been lobbying to have corrected, but at present they are a bit of an
Figure 3.40: The Xfrog interface with the highly detailed and textured model of a banana tree
intractable problem. However, the Maya plug-in does a great job of importing the models and
the Xfrog animation as well. The CD includes a few Xfrog files that I have converted into Maya
4 format. Also on the CD is an original abstract animation (Xfrog_primed320QTsoren3.mov) I
created in Xfrog and then imported to Maya for texturing and rendering.
Figure 3.41: A series of images from the award-winning animation "Panspermia" by Karl Sims.
See http://web.genarts.com/karl/index.html for more information on Karl's work. All
images © Karl Sims. All rights reserved.
Summary
As we come to the end of this chapter, I offer some final parting ideas and thoughts on what
was hinted at yet not given full treatment here. The idea of passing a brush stroke across
multiple surfaces led me to think about Paint Effects and Maya Dynamics. The ability to
apply hard/soft body dynamics to Paint Effects-laden geometry gives some added intrigue to
animation. It also then stands to reason that we can affect Paint Effects brush strokes in
other dynamic ways as well. Forces such as gravity hold interesting promise for experimenta-
tion too. I have done interesting experiments with deformers and constraints (springs, nail,
and hinge). MEL as a programming language is formidable in developing self-evolving Maya
worlds right within the program itself. I have included on the CD some simple Maya binary
files to enable you to delve into a few of these issues. By dissecting these binaries, you might
spark your own ideas and move in new directions. These files are located in the chapter
directory under Bonus_Binaries. Whatever you do, experiment with abandon, and have fun!
Chapter Four: Animation and Motion Capture—Working Together
Sensors capture the data and then plot the data's individual coordinates in space. The
most popular systems use optical or magnetic sensors. Optical systems involve special cam-
eras that surround the capture area, facing the subject. Small reflective spheres are placed in
strategic points on the body, and the coordinates of each ball in 3D space are captured and
translated into data that can be read into an animation program. Magnetic systems rely on
cables to transmit the data of the coordinates.
Another term you'll hear in relation to mocap is performance animation. Performance
animation typically refers to mocap that focuses on human motion, for example, the specific
style of a famous person's movements. Often, the point of choosing performance animation
as a technique is to capture the nuances of the movements specific to a personality. Andre
Agassi's tennis moves and Michael Jackson's distinctive dancing are two examples; both
celebrities have been mocapped for the purpose of recording their distinct styles of
movement.
James Cameron's film, Titanic. Titanic was a significant film, especially in the computer
graphics industry, for its visual effects. Some of those effects were produced by animating
human characters through the use of keyframing, motion capture, and at times a combina-
tion of mocap and keyframing. Mocap was used mainly in two ways. For normal actions,
such as people walking around on the deck, it was often used directly. It was used as refer-
ence for some of the stunt work such as characters climbing on the railings.
Rotocapture and rotoscoping are similar techniques in that they both require an artist
to animate on top of existing reference material. The main difference is that in rotoscoping
the artist animates over a 2D plate or image, and in rotocapture the artist animates over 3D
motion data. The limitation of rotoscoping is that you have only one perspective to use as ref-
erence, whereas with rotocapture you can move the camera virtually anywhere in 3D space.
Animators have encountered two problems with rotoscoping: it can be extremely time-
consuming, and it can be creatively frustrating. In 3D animation especially, rotocapture is
becoming more antiquated with the advancement of motion capture technologies and ani-
mation pipelines. These improvements let you use motion capture as a basis for your anima-
tion. In essence, mocap delivers the basic elements of weight and timing with a foundation
grounded in real-world physics. You can layer keys on top of motion capture data and add
layers of more creative and thoughtful elements to the character's performance. You can also
use the raw mocap as reference for timing in animating characters, while preserving the abil-
ity to keep more creative control over the process.
• FILMBOX, at www.kaydara.com/
• Motion Analysis, at www.motionanalysis.com/
• VICON, at www.vicon.com/
• Biovision, at www.biovision.com/
• House of Moves, at www.moves.com/
• Motion Analysis Studios, at www.performancecapture.com/
• The Illusion of Life: Disney Animation, by Frank Thomas and Ollie Johnston (ISBN 0-7868-6070-7)
• All of Eadweard Muybridge's photographic studies of figures and animals in motion
All animation for Final Fantasy: The Spirits Within needed to be completed within
just over a year and a half, and that goal was met. Production for the entire movie,
however, spanned approximately four years.
Second, we decided to ensure that the animation department had the necessary tools to
animate over the motion capture, in order to tweak it as a whole or in parts. Square USA
developed a pipeline to ensure that the animation and mocap departments could work
together. Square USA created a proprietary toolset in Maya that allowed a "hybrid" motion
capture and keyframe animation process. Using the methodology of a motion-animation
pipeline that supported a workflow of motion capture and animators working together, we
were able to decide on a per-shot level how much mocap to use straight out of the box and
how much to animate over or replace completely. It wasn't always one or the other—all
mocap or none. For the humans, most of the time it was a blend. For example, we might
tweak some parts of the body for certain shots. Aki's hips might need more rotation, or parts
of the body, such as the arms, might be keyed, and others, such as the legs, might be
mocapped. Sometimes we combined keyframe animation with mocap to accommodate a
change in a character's action.
Figure 4.1: Square USA utilized motion capture to create realistic human subtleties and nuances for
the performances of computer-generated characters in Final Fantasy: The Spirits Within. Copyright
© 2001 FF Film Partners. All rights reserved. Square Pictures, Inc.
When details are overlooked in the capture session, the result can mean more
work than anticipated for the animation department. Generally, the primary
reason for unnecessary changes to the motion capture data is that intensive pre-
production development has been overlooked or glossed over. You must plan
every aspect of your character, from what they say, to who they interact with,
and how they do it. Missing small details in casting, timing, set design, dialogue,
or interaction will catch up with you in motion editing or post motion capture
animation production and affect your delivery schedule and your budget.
Cleaning Mocap
You might have heard that after a motion capture session is completed and the data is
handed to an animator, it still needs cleaning. In a large production pipeline, a service bureau
or motion capture technicians may have done some cleaning before an animator deals with
the mocap data. Depending on your pipeline, the data may still need some attention to
improve the quality of the result.
What does it mean to "clean" mocap? Standards of clean vary. For our purposes, we
define clean as data that preserves the key poses of the performance and does not have tech-
nical issues such as flipping joints or noisy vibrations in the curves.
Mocap data can be very clean if the conditions at the time of capture are ideal, but
that's not always the case, unfortunately. In the past, an animator often ended up massaging
the data one way or another to get the desired result.
You need to make sure that the motion capture team you work with can provide clean
data. Ask for sample data to see how much, if any, clean-up you need to do. Your data might
need to be cleaned for many reasons, but the motion capture team should be experienced
enough to deliver clean data to you. The quality of your data has a lot to do with the artistic
and technical capabilities of the motion capture data tracker and editor.
Figure 4.2: In the top image, motion capture data shows noise. In the bottom image, a cleaning filter
operation has been applied to the curve to minimize the noise.
Because noise is typically a high-frequency jitter, sometimes it's more obvious in a high-
resolution skinned model than in a low-resolution animation proxy model. If in doubt, double-
check the final animation on a skinned character before calling it finished.
Figure 4.3: The model with some of the controls displayed in the Layer Editor
book (see Chapter 1, for example). We designed these examples to have a plug-and-play feel to
them so that you can simply load the mocap and start animating on top of it.
The motion capture that you will use for these examples is that of our model walking a
straight line on a flat surface. The walk is just right for the shot, but the director wants a
bump in the surface. For the first example, you'll use the mocap data and create an anima-
tion over that, featuring the model's reaction to the surface's change in height.
If you were to animate the mocap skeleton, you would edit the keys in the motion cap-
ture f-curves, which permanently affect the mocap data. We highly recommend not doing
that because of its destructive nature. In this example, we'll animate an offset skeleton that is
driven by the mocap data. Using an additional skeleton, you can adjust the mocap without
destroying it. You can use a setup in which the identical skeleton—the offset skeleton—has
its IK handle, joints, and pole vectors point-constrained to locators that are parented to cor-
responding joints on the mocap skeleton. When there are no adjustments to the offset skele-
ton, the two skeletons are in perfect alignment and overlap. When you move the locators of
the offset skeleton, the second skeleton becomes apparent. You can animate the controls to
literally offset the mocap in order to get the desired result for the animation. Essentially, you
animate slight changes on top of the mocap data while retaining the capture data's purity.
This example illustrates how slight changes to the offset skeleton drastically alter the
action of the character without affecting the motion capture data. A practical use of the off-
set skeleton involves slight retargeting of the controllers. For example, you capture an actor
reaching for a doorknob, turning it, and opening a door, but in the scene file the doorknob is
at a different height than the original. Simply move the offset locator for that hand to the
new location, and the skeleton will follow. The f-curves are not deleted, nor is the capture
data destroyed.
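As a concrete illustration of that setup, the fragment below wires a single wrist: a locator parented under a mocap joint drives the offset skeleton's IK handle through a point constraint, so keys on the locator offset the captured motion without ever editing its f-curves. Every name here is hypothetical; the scene on the CD already contains an equivalent rig.
    string $loc[] = `spaceLocator -n leftWristOffset_loc`;
    parent $loc[0] mocap_leftWrist;                         // hypothetical mocap joint
    xform -os -t 0 0 0 $loc[0];                             // zero the locator under its new parent
    pointConstraint -mo $loc[0] offset_leftArm_ikHandle;    // hypothetical IK handle on the offset skeleton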
We would like to thank Florian Fernandez for providing the animation model,
setup, and MEL script on the CD that accompanies this book. You can visit his
web page at www.flo3d.com. We would also like to acknowledge Spectrum Stu-
dios for providing the motion capture data.
From the CD, load the scene file Walk.mb. When you play the animation, which is driven by
mocap data, you should see a low-resolution model of a woman walking.
For most optical motion capture recording sessions, the action is captured at a frame
rate in the range of 60 to 120 frames per second. The reason for such high-speed capture is
to ensure that the markers do not blur. The technology behind tracking markers accurately
depends on the software being able to find the center of the marker. If a marker blurs, it
becomes elongated, and the position of the center of the marker is less accurate, causing jit-
ters in the data. The mocap for these examples was captured at 60 frames per second, but
you'll need to change the frame rate to fit either the film (24 frames per second) or the NTSC
(30 frames per second) standard for playback. In this example, we'll set the frame rate to
video NTSC, which is 30 frames per second. You will also want to turn off the Auto Key
feature, which can also be done in the Preferences window.
To adjust preferences, follow these steps:
1. From the Marking menu, choose Window → Settings/Preferences → Preferences to
open the Preferences dialog box:
2. From the Categories list, select Settings to open the Settings: General Application Prefer-
ences window. In the Working Units section, set the Time field to NTSC (30fps). Also,
make sure that Linear is set to Foot. At the bottom of the window, click the Save button.
3. Turn off Auto Key in the Keys window under Settings by clearing the Auto Key check box.
The motion capture animation now ends at frame 191.
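If you prefer to make these changes from the command line (or a setup script), the same preferences boil down to two MEL calls: currentUnit switches the working units, and autoKeyframe toggles Auto Key.
    currentUnit -time ntsc -linear ft;   // 30 fps playback and linear unit in feet
    autoKeyframe -state off;             // turn off Auto Key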
Notice in the workspace window the five yellow cubes on the model's body: two at her
wrists, two at her ankles, and one at her root. These are the offset controllers that we will
animate. All the cubes move in X, Y, and Z, but only the root cube both moves and rotates. To
make the cubes easier to select for animation, follow these steps:
1. Select the yellow cube at the root, and make sure that nothing else is selected. From the
Marking menu, choose Display → Component Display → Selection Handles to display
a small selection handle above the geometry in the center of the cube. The selection
handle always appears on top of the geometry. When the selection handle is selected,
you are controlling the root offset controller. Dragging a selection box over the root
area selects only the selection handle of the root offset controller.
2. Select the yellow cubes at the wrists and ankles. Choose Display → Component Display →
Selection Handles to display all the offset controllers. This makes selecting and dese-
lecting these nodes easier. Drag a selection box around the model's body. All the offset
controllers are selected, and none of the geometry or joints are selected. Figure 4.5 dis-
plays the selection handles for offset controllers.
Lock All Rotation and Scale Values for the Wrists and Ankles
To lock the rotation and scale values, follow these steps:
1. Select the offset controllers for the wrists and ankles. In the Channel box, click and
drag down to select the Rotate X, Rotate Y, Rotate Z, Scale X, Scale Y, and Scale Z
attributes. RM click and hold to open a pop-up menu. Select Lock Selected to lock the
attributes and gray them, as shown in Figure 4.6.
Since the root offset translates and rotates, we will only lock the scale attribute for that
node.
2. Lock the Scale attributes for the root offset controller.
3. Save this scene as Offset.mb. You will use this scene for the examples that follow.
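The same locking can be scripted with setAttr -lock. The controller names below follow the ones used later in this chapter (armOffsetL, armOffsetR, legOffsetL, rootOffset); legOffsetR is assumed to follow the same naming pattern.
    string $limbs[] = {"armOffsetL", "armOffsetR", "legOffsetL", "legOffsetR"};
    string $attrs[] = {"rx", "ry", "rz", "sx", "sy", "sz"};
    for ($node in $limbs) {
        for ($attr in $attrs) {
            setAttr -lock true ($node + "." + $attr);
        }
    }
    // The root offset still translates and rotates, so lock only its scale.
    setAttr -lock true rootOffset.sx;
    setAttr -lock true rootOffset.sy;
    setAttr -lock true rootOffset.sz;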
2. Select the model's root and wrist offset controllers. Set keys for these nodes at
frames 14, 35, 56, 78, 101, 122, 143, 165, and 187.
3. With the root and both wrist offset controllers selected, go to frame 25. Use
the Move tool to translate the model's upper body down so that her knees are
slightly bent. Set
a key.
4. To copy this key to frames 45, 68, 90, 111, 133, 153, and 176, first make
sure you are on frame 25 in the Time Slider. You can press the period key to
jump forward to the next key in the timeline, and the comma key to jump back
to the previous key. MM click anywhere on the Time Slider and hold. (MM
dragging in the timeline lets you change time without updating the scene.)
Drag your mouse to frame 45 and release. Set a key. You have copied frame
25 to frame 45. Continue to copy this key to the remaining frames.
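Scripted, the keying in steps 2 through 4 starts out like the sketch below; the controller names match the nodes you will see in the Graph Editor shortly. The bent-knee pose itself is still posed by hand at frame 25 and then copied along the timeline, exactly as described above.
    string $ctrls[] = {"rootOffset", "armOffsetL", "armOffsetR"};
    int $stanceFrames[] = {14, 35, 56, 78, 101, 122, 143, 165, 187};
    for ($f in $stanceFrames)
        setKeyframe -t $f $ctrls;    // key the normal stance on each of these frames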
Figure 4.6: Lock the attributes on the wrist and ankles for Rotate X, Y, Z and Scale X, Y, Z. Also
lock the attributes for the root offset controller for Scale X, Y, Z.
Play the animation. The model looks as if she has a heavy weight on her shoulders because of
the bounce we created in her walk. You can adjust the f-curve of her bounce in the Graph
Editor. Follow these steps:
1. Select the model's root and both wrist offset controllers. In the Marking menu, choose
Window → Animation Editors → Graph Editor to open the Graph Editor.
2. Hold down the Ctrl key and select the Translate Y nodes for armOffsetL and armOff-
setR. Also Ctrl+select the Translate X node for the rootOffset node.
We set up this character with the X translation used for vertical motion on the
rootOffset, rather than the Y translation. This applies only to the rootOffset node.
3. In the Graph Editor, shown in Figure 4.7, you should see the f-curves representing the
model's root and right and left wrist offset nodes forming a sine-like wave between 0
values and negative values. LM click and drag a selection box around all the keys that
appear in the Graph Editor in the negative value range. All the keys that appear in the
0 value range represent the model's offset controllers in the normal stance. All values
that appear in a negative value represent her offset controllers in the squat position.
4. With the negative value keys selected, select the Move tool, and then Shift+MM click
inside the Graph Editor to display an icon that consists of an arrow and a question mark.
This will constrain your movement of the selected keys to either a vertical or horizontal
movement within the Graph Editor. By moving the mouse cursor upward, you choose the
vertical constraint, and the keys move closer to or farther from the 0 value in the Graph
Editor. Shifting the negative value keys closer or farther from the 0 value makes the mod-
el's root and right and left wrist offset nodes drive her upper body up or down, which
affects the amount of bend in her knees. Adjust the keys according to your preference to
make the model look as if she is carrying a heavy burden on her back (see Figure 4.8).
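You can also make this adjustment numerically instead of dragging keys: scaleKey scales the key values about zero, so the stance keys stay put while the squat keys move proportionally closer to (or farther from) zero. This is only a sketch, and it assumes the attribute layout described in the note above.
    // Halve the squat depth; raise or lower -valueScale to taste.
    scaleKey -attribute translateY -valuePivot 0 -valueScale 0.5 armOffsetL armOffsetR;
    scaleKey -attribute translateX -valuePivot 0 -valueScale 0.5 rootOffset;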
When you're ready to move on, save this scene as HeavyWalk.mb, and load Offset.mb.
Figure 4.7: In the Graph Editor, select the keys that are in the negative value range.
Figure 4.8: Translate the model's offset controller keys closer to the 0 value to lessen her upper body
plunge (or move them farther from the 0 value to increase her drop).
Figure 4.9: Notice that at frame 50 the model's left foot sinks into the cube.
Figure 4.10: At frame 50, the model's left foot lands on the platform and does not sink through it.
The ankle is offset by a value of 30 in Translate Y.
Now, let's make the bump affect the model's left foot when she places it down at about
frame 50 (see Figure 4.10). Follow these steps:
1. In the Perspective window, select the left ankle offset controller. Go to frame 30 in the
Time Slider and set a key. This is the first frame of animation for this offset.
2. Go to frame 36. In the Channel box, enter a value of 30 in the Translate Y field. Set a key.
Play the animation. It looks as if the model raises her foot starting at frame 30 and
steps on the bump at frame 50. However, she continues to walk for the remainder of the ani-
mation with her left foot on an invisible elevated platform and her right foot at ground level.
Your animation should look like the movie bumpLeftLegPlat_sideView.mov on the CD.
Let's bring our model's left foot down so that she appears to walk only on a bump and
not on a platform (see Figure 4.11). Follow these steps:
1. In the Time Slider, MM copy frame 36 to frame 72. Set a key. This ensures that the
model's foot is raised between frames 36 and 72, the duration of time that this foot is
on the bump.
2. In the Time Slider, MM copy frame 30 to frame 86. Set a key. At frame 86, the model's
left foot is back in the normal offset position.
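For reference, here are the same four keys expressed in MEL, using the legOffsetL node name that appears in the Graph Editor later in this example.
    setKeyframe -t 30 legOffsetL;                           // start of the lift, at the current value
    setKeyframe -t 36 -at translateY -v 30 legOffsetL;      // foot raised onto the bump
    setKeyframe -t 72 -at translateY -v 30 legOffsetL;      // hold for the time spent on the bump
    setKeyframe -t 86 -at translateY -v 0  legOffsetL;      // back to the normal offset position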
Your animation should now look like the movie bumpLeftLeg_sideView.mov on the CD.
Switch to the Side window. As you play the animation, notice that although the
model's left foot rises to step on the bump, her right foot does not elevate and appears to
move through the bump. Let's fix that (see Figure 4.12). Follow these steps:
1. In the Perspective window, select the right ankle offset controller and set keys on
frames 50 and 69 in the Time Slider.
2. Go to frame 56. In the Channel box, set a value of 60 in the Translate Y field. Set a key.
Your animation should now look like the bumpLegsOnly_sideView.mov movie on the CD.
Our model's feet are looking better, but her upper body needs to react. Let's adjust her
root (see Figure 4.13). Follow these steps:
1. Select the root offset controller. Set keys on frames 50 and 84 that define the duration
of time for her root animation.
Next, we will translate her root offset up.
Figure 4.11: After frame 86, the model's left foot is positioned back on ground level.
Figure 4.12: The model's right ankle is raised by a value of 60 in Translate Y. Her foot no longer
intersects the geometry of the platform.
Figure 4.13: Translating the model's root offset controller affects her whole upper body.
Remember that on this model, the rootOffset's Translate X value takes the place
of the Translate Y value. Therefore, when you translate the root offset controller
up and down, enter values in the Translate X field in the Channel box.
2. With the root offset controller still selected, go to frame 56 and set a value of 35 in the
Translate X field of the Channel box. Set a key.
3. In the Time Slider, go to frame 68 and set a value of 9 in the Translate X field of the
Channel box. Set a key.
The keys you just set have lifted up the model's root. Now let's add a key that acts as
the follow-through for her body weight coming down.
4. Go to frame 75 and set a value of -18 in the Translate X field of the Channel box. Set a key.
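The same root keys can be scripted. The sketch below assumes the controller node is named rootOffset (as in the note above), that translateX is its vertical channel, and that its resting value is 0 at frames 50 and 84.

// Root offset keys for the step onto the bump and the follow-through.
setKeyframe -time 50 -value 0 -attribute "translateX" rootOffset;    // start of the root animation
setKeyframe -time 56 -value 35 -attribute "translateX" rootOffset;   // root lifts as she steps up
setKeyframe -time 68 -value 9 -attribute "translateX" rootOffset;    // settling
setKeyframe -time 75 -value -18 -attribute "translateX" rootOffset;  // weight drops through
setKeyframe -time 84 -value 0 -attribute "translateX" rootOffset;    // end of the root animation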
Now let's work on making the model's hands move with the rest of her body. Follow
these steps:
1. In the Perspective window, select both the right and left wrist offset controllers. Set
keys on frames 40 and 82.
2. Go to frame 56. In the Channel box, set a value of 35 in the Translate Y field.
3. At frame 69, set a Translate Y value of 10, and at frame 76, set a Translate Y value of -13.
Play the animation. She walks and steps on top of a bump that wasn't there when the
action was captured.
Let's fine-tune the animation. You might have noticed that the model's left foot sinks
slightly when she steps on the bump. Let's fix that in the Graph Editor. Follow these steps:
1. Select the left ankle offset controller. Open the Graph Editor, and select the Translate Y
value under the legOffsetL node. You will see the Y translation curve. If you do not see
the f-curve in its entirety, in the Graph Editor choose View → Frame All, or press the A key as a shortcut.
2. LM drag a selection box around the keys you placed at frames 36 and 72. Both keys
are selected, and the selection handles appear for each selected key (see Figure 4.14).
3. In the Graph Editor, choose Tangents → Flat from the drop-down menu (see Figure 4.15).
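Choosing Tangents → Flat is the same as running keyTangent on the selected keys. A minimal sketch, assuming the legOffsetL node from step 1:

// Flatten the tangents at frames 36 and 72 so the curve plateaus while the foot is planted.
keyTangent -time 36 -attribute "translateY" -inTangentType flat -outTangentType flat legOffsetL;
keyTangent -time 72 -attribute "translateY" -inTangentType flat -outTangentType flat legOffsetL;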
For a final touch, add rotation to the model's hips (see Figure 4.16). Follow these steps:
1. Select the root offset controller, and go to frame 56 in the Time Slider. Select the Rotate
tool, and rotate the root offset controller so that the model's left hip is higher than her
right hip. Set a key.
2. Go to frame 75, and rotate the root offset controller so that the model's right hip is
higher than her left (see Figure 4.17). Set a key.
3. Save this file as Bump.mb.
Figure 4.14: In the Graph Editor, select frames 36 and 72 on the legOffsetL Translate Y curve.
Figure 4.15: Flattening the tangents creates a plateau in the curve, which stabilizes the model's foot
and places it firmly on the ground.
Figure 4.16: Creating hip rotation. The more the hip rotates, the more swagger and attitude.
To get a better view of your animation, hide the mocapSkeleton, offsetControls, and
offsetSkeleton in the Layer Editor. You should see only the bindSkeleton layer. With
these settings, there is no additional information on the character to distract your
eye when viewing her animation.
An offset controller is a subtle but powerful animation tool when working with
mocap. You can use it to animate small changes in the motion capture character in a speedy
Figure 4.17: Rotate the hips so that the right hip is higher than the left.
and efficient manner. If you are not happy with the results, you can delete keys from the off-
set controllers without affecting the mocap data.
In this exercise, we will again work with our model that walks with poise and attitude. This
shot would be perfect if not for one small detail: the director wants the model to blow a kiss
and wave to her fans. In the past, this would have been a time-consuming endeavor because
motion capture data has keys on every frame. You would have had to eliminate and reduce
keys and then key the rotation of the joints in her shoulder, arm, and hand.
However, with the Trax Editor, you can create an overlay animation of the model rais-
ing her arm and waving without destroying the integrity of the original motion capture. The
Trax Editor is a nonlinear animation sequencer that you can use to layer and blend clips
from animated elements and overlap them to create new motions. The Trax Editor can be
quite useful when you want to layer keyframed animation over motion capture data.
To start, load the scene file that you saved earlier in this chapter, Offset.mb.
Figure 4.18: Expand the mocapSkel node. You will be setting rotation values in the Channel box for
the R_collar, R_shoulder, R_elbow, and R_wrist joints.
3. With the R_collar, R_shoulder, R_elbow, and R_wrist joint nodes still highlighted, in
the Channel box select Translate X, Translate Y, and Translate Z. RM click on top of
the selection to display the Channels dialog box. Scroll down and select Lock Selected.
The numeric values for Translate X, Translate Y, and Translate Z are shaded and are
now locked.
4. In the Channel box, select Scale X, Scale Y, Scale Z, and Visibility. Select Lock Selected
for these values also. The numeric values for Scale X, Scale Y, Scale Z, and Visibility
are shaded and locked.
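Locking channels one by one in the Channel box gets tedious if you do it often. The following sketch locks the same channels with setAttr; the joint names are the ones used in this exercise.

// Lock translate, scale, and visibility on the four arm joints, leaving only rotation keyable.
string $joints[] = {"R_collar", "R_shoulder", "R_elbow", "R_wrist"};
string $channels[] = {"tx", "ty", "tz", "sx", "sy", "sz", "v"};
for ($joint in $joints) {
    for ($channel in $channels) {
        setAttr -lock true ($joint + "." + $channel);
    }
}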
Figure 4.19: Set an in and out range for the R_collar and R_elbow joints.
Figure 4.20: Set an in and out range for the R_shoulder and R_wrist joints.
on the CD. Scrub the Timeline or play the animation to see the model raise her arm, blow a
kiss, and wave.
1. In the Trax Editor, select the kissNwave clip. Notice that the clip has numbers on both
the head and tail, designating the start frame and the end frame. On the left side of the
kissNwave clip is the number 0; on the right side is the number 132.
2. LM click the kissNwave clip, and slide it until the frame start reads 45 and the frame
end is 177.
Scrub the Timeline. Now the animation doesn't start with the model blowing a kiss.
Instead, she takes a few steps and then raises her arm. There seems to be a little more atti-
tude in her behavior since we shifted her kiss and wave until after her entrance.
Figure 4.22: Shift the kissNwave clip on the Timeline until you are happy with the timing of the
model's wave. Use the Graph Editor to tweak f-curves.
skeleton driven by capture data. The other technique involves disabling the motion
capture clip in order to animate on a static character. Using this technique, you isolate your
animation to evaluate its strength on its own merit by turning off the primary body move-
ment. Follow these steps:
1. In the Timeline, go to frame 0.
2. Open the Trax Editor, RM click and hold the sexyWalk clip to display a pop-up menu, and clear the Enable Clip check box. Do the same for the kissNwave clip.
Scrub the Timeline. The motion capture clip sexyWalk and the animation clip kissNwave are now disabled, so the skeleton does not move. You will not lose this motion; it is simply turned off for now.
3. Set keys on the R_collar, R_shoulder, R_elbow, and R_wrist joint nodes. You can create
your own wave or use the keys in Tables 4.1, 4.2, 4.3, and 4.4, earlier in this chapter.
4. In the Outliner, select the R_collar, R_shoulder, R_elbow, and R_wrist joint nodes. Press F2 to switch to the Animation menu set, choose Animate → Create Clip, and enter kissNwaveRelative in the Name field. Click the Create Clip button.
Scrub the Timeline. You should see the arm animating while the rest of the model's
body stays static.
Enable sexyWalk
To turn on the model's motion capture, we need to enable that clip. Follow these steps:
1. In the Trax Editor, RM click sexyWalk and check Enable Clip to activate the motion
capture movement.
Scrub the Timeline. Something is not right. The model's arm is waving erratically and is not in the positions that we set. We keyed her arm with the skeleton static at the origin, so the clip needs to be made relative to the motion capture clip.
2. In the Trax Editor, RM click kissNwaveRelative and check Relative Clip.
Scrub the Timeline. Our model is now walking and waving. Because we created the animation of the arm with the sexyWalk clip disabled, we can now enable or disable sexyWalk at any frame in the Timeline and analyze the kissNwaveRelative clip moving independently of the motion capture clip.
Adjust f-curves accordingly with the sexyWalk clip enabled or disabled.
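You can also toggle clips from MEL rather than the right-click menu. Treat the following as an assumption-laden sketch: it presumes the clip node carries the same name shown in the Trax Editor and exposes an enable attribute, so check the actual node name in the Outliner before relying on it.

// Freeze the mocap to judge the keyframed arm on its own, then turn it back on.
setAttr "sexyWalk.enable" 0;
setAttr "sexyWalk.enable" 1;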
In this example, we'll combine the mocap that we already have with keyframing, using the
simplified hybrid setup you've been using from the CD. The model will walk along and inad-
vertently stumble over a box. Since the mocap does not include her tripping, we must keyframe
this motion on the animation skeleton, blending it with the mocap skeleton's motion.
This particular skeleton has only basic animation controls for purposes of
demonstration. A production skeleton should have several more options such as
forward kinematic controls on the arms and legs, as well as inverse kinematic
controls.
To source the MEL script, open the Script Editor, and choose File → Source Script. Browse to the location of the Snap.mel script on the CD. Select it, and click Open.
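Sourcing from the menu is equivalent to running the source command. The paths below are only placeholders; point them at wherever you copied Snap.mel.

// If Snap.mel is in a directory on your script path, the file name alone is enough.
source "Snap.mel";
// Otherwise, give the full path to the copy of the script (placeholder path shown).
source "C:/temp/Snap.mel";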
2. Choose File → Save Selected to Shelf, and enter a name such as SELECT.
3. Click an empty space in your workspace to deselect everything.
Now, let's bring back some of the layers you've hidden, such as the bindSkeleton and
the mocapSkeleton. You can use the offset skeleton and offset controls as well later, if you
like, but for now, let's keep them hidden to reduce screen clutter.
Let's start animating. Select the Control box, and key the Mocap_keyframe attribute in
the Channel box to a value of 0.
Put the bindSkeleton layer in reference mode. This will make it easier to select the
animation Control boxes.
Summary
In this chapter, we covered the basics of motion capture and discussed methods of working
with it. We manipulated the mocap using offsets and the Trax Editor, as well as by adding keyframes to a hybrid animation model to blend traditional keyframing with motion capture. You can combine animation and motion capture in many ways, so have fun exploring
and try different solutions.
Lip-Synching Real-World Projects
John Kundert-Gibbs, Dariush Derakhshani, and Rebecca Johnson
Preproduction Tasks
As with all animation work, the first step is preproduction. In preproduction, you create the storyline and script (obviously an important step when lip-synching is involved), establish the look and feel of the animation, and create storyboards that describe the action. If you are working on
a series of similar animations (for example, a Saturday-morning cartoon series), you will
likely have a "bible" that contains the general art direction, character descriptions, and pos-
sibly technical material.
In addition to creating the general look of your models and scenes and creating the dia-
log, in the preproduction phase you might also define general issues of how your character(s)
will speak. For example, you might decide how realistic the lip-synching needs to be and
determine your general methodology. A little forethought at this stage can save valuable time
later. For example, if the characters are extremely cartoonish and stylized, you might need
only simple "open/closed" positions for their mouths. On the other hand, if the animation
requires extreme realism, now is the time to face this challenge head-on and be sure you have
enough resources to tackle this task.
Never overanimate your characters. If they need only simple mouth shapes, don't
build a complex and/or hard-to-use system for lip-synching. You are only wasting
your—and your company's—money.
problem areas, all of which leads to wasted time and money and a lot of frustration that can
be avoided by creating or renting an appropriate facility in the first place.
If you have the budget, by all means rent a recording studio for the time you need it. A
good rule of thumb is that you need about an hour of recording time per minute of finished
dialog. If you can't afford to rent a studio, see if you can beg facilities from somewhere close
by. Often—especially for nonprofit projects—managers of facilities will allow recording ses-
sions for little or no charge. The one problem that often arises in these circumstances is that
the recording session has to take place at odd hours, which can be stressful for cast and crew.
If you must construct your own recording space, try to keep the following points in mind:
• Find the best microphone you can, and never, ever use the built-in microphone on a
camera or a camcorder.
• Find the acoustically deadest space you can. An anechoic chamber is best; otherwise,
use heavy drapes, foam, or other sound-deadening materials to reduce echo.
• Remember that floors and ceilings create echo, so lay out blankets, carpeting, or other
materials on them.
• Listen carefully to your space or do a test recording. Listen for any kinds of hums or
"leaking" sound from the outside world. Anything from fluorescent light fixtures to air-
conditioning can cause a low level of noise in your recording that is difficult to delete.
• If possible, have only your voice actors and the microphone(s) in the room. The fewer
machines (recording devices, cameras, computers) in the room, the less noise you will get.
• Never try to create an effect when recording. For example, don't try to capture a natu-
ral echo if your characters will be in a cave. With the audio-engineering software avail-
able today, it's extremely easy to add this type of effect, but almost impossible to get rid
of an incorrect echo or other effect after it's recorded.
The best sound for recording is completely flat and noiseless, save for your actors'
voices. Be sure to test your actual actors before you do your final recordings. Often actors'
voices—especially stage actors' voices—have an extreme dynamic range, which can cause a
poorly adjusted recording setup to clip. It's much better to find this out in a trial recording
than after your actors have gone home for the evening!
To find voice actors for your dialog tracks, first decide what your voices will sound
like. Next, either hire professional voice talent (if your budget allows) or go scouting for
amateur actors who are willing to work for the exposure. Someone who is familiar with
local community or college theater programs can find you good talent quickly. If you have
the good fortune to be able to audition your actors, listen to what they bring to a particular
role. Often a well-trained actor gives you a more interesting reading than what you had in
mind. You just have to be able to hear that different is better, not worse.
A point of some debate is whether to rehearse your cast before doing voice-over
recordings. We believe a short rehearsal just before recording is beneficial because it helps
actors get into the flow of the scene. Others argue, however, that this rehearsal reduces the
spontaneity of the recording. Regardless of whether you rehearse, finding a good director
can help get the best out of your actors.
During the recording process itself, continue to listen to your actor(s). They will often
come up with marvelous spur-of-the-moment line readings that can make a dull line into a
memorable or humorous one. One of the best techniques for getting a range of line readings
out of an actor is to have them read a line three or four times in a row with just a slight pause
between readings. Often this repetition helps them loosen up and try readings that they
wouldn't necessarily have tried had they had more time to rehearse. Again, the audio software
available today makes creating a dialog track from many individual takes quite easy.
The human mouth is made up of a number of muscles laid out in concentric rings around the lips, allowing us our large range of motion and expression in this area. The most straightforward way to model this muscle structure is to model the mouth area as a series of rings moving away from the lips. When you then pull on faces or control vertices, the mouth area behaves as if there were a more-or-less circular muscle structure beneath it.
Usually a smooth bind is best for skinning the jaw area. You then need to manipulate
the weights so that the lower jaw is completely affected by the jawbone while the
upper jaw is completely unaffected. Fading the influence of the lower jawbone in the
lower cheek area produces a soft transition from the low jaw to the rest of the head.
In more complex cases, you can build an entire "muscular" system in spiderlike fashion
out of bones surrounding the lip area. As each "leg" pulls on an area of the lip, the affected
skin of the model is distorted along with the bone. Although this complex bone structure can
create amazingly subtle effects, it is fairly complex to set up and use and, with the advent of
blend shapes, is not used as frequently as it once was.
To rig a character for blend shapes, you make multiple copies of the default mouth
area—which is modeled in an expressionless neutral pose—and deform the vertices of the
mouth to create various shapes, which become the target shapes that the neutral head will
blend to when you animate the mouth. Rigging for blend shapes, then, is creating a library of
mouth shapes that you can select, either by themselves or in combination, to create the final
mouth shape at each moment of the final animation. For simple characters, the mouth library
can be fairly small. Smile, frown, Ah (open), M (closed), and E (half open and stretched)
Once you create your blend shapes, you will probably want to hide the target mod-
els in order to clean up your work area.
One useful aspect of the blend shape method in Maya is that you can use base blend shape targets to create "meta" blend shape targets. For example, say you want to create the mouth shape for whistling, and you already have target models for closed pursed lips and for the O sound. Rather than create a new target model from scratch for the whistle, you can use the Blend Shape window to combine the O and pursed mouths and then save this new shape as a blend shape target by clicking the Add button in the Blend Shape window.
In Figure 5.4, the third blend shape (whistle) was created from a combination of O and pursed shapes given by the slider positions. Creating blend shapes from other blend shapes is a powerful time-saver during the rigging process. You can create five to ten basic mouth shapes and then produce the actual phonemes (Ah, O, E, M, K, and so on) by combining these basic mouth shapes.
Figure 5.4: The Blend Shape window shows the whistle mouth shape created by combining the O and pursed mouth shapes.
frame.) Although this method may be fine if you only need to lip-synch a few seconds of
speech, a much better method for longer stretches of dialog is to use Maya's character and
Trax Editor features to create clips for various words and mouth shapes.
To load sound into your Maya scene, save your dialog track as a .wav file, choose File → Import, and browse to the sound file. Once the file is imported, you can see the shape of the sound file in the Timeline by RM clicking the Timeline and choosing Sound → <name of sound> from the shortcut menu.
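The same import can be scripted. This sketch uses the sound command; the file name dialogTrack.wav and node name dialogTrack are placeholders for your own track.

// Create an audio node for the dialog track and show its waveform in the Time Slider.
sound -file "dialogTrack.wav" -name "dialogTrack";
global string $gPlayBackSlider;   // Maya's built-in playback Time Slider control
timeControl -edit -sound "dialogTrack" -displaySound true $gPlayBackSlider;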
We will look at the Trax Editor in more detail in the first hands-on example later in this chapter, but the general method is to create a character (or subcharacter) for the mouth. Add the blend shape node to this character with the envelope and shape names selected in the Channel box and with the From Channel Box option selected in the Create Character Set options. This last step ensures that only these attributes are keyed with the character, removing unnecessary keys during animation.
Once the blend shape is a character, take the script and start creating the words the
character speaks to form a library of character clips. Because the Trax Editor allows for scal-
ing of words, you do not need, at this point, to match the words with any given timing; so in
general you select a standard length (5 or 10
frames) and make all words last that long by
default. During the actual synching process,
you can adjust this timing, as well as the size
of the mouth for the word, to fit the way the
word is actually spoken.
To create a clip, simply keyframe the series of mouth shapes for any given word or reaction, choose Animate → Create Clip, and give the clip the name of the word you just created, as shown in Figure 5.5. Once you animate an oft-used word for your character, you can save it as a clip and then load it for future use.
Figure 5.5: Creating a clip for the word "goodbye"
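Under the hood, a word clip is just a handful of keys on the blend shape weights. Here is a hedged sketch of the keys behind a short word, assuming a blend shape node named blendShape1 with targets named M and Ah; substitute your own node and target names.

// Keys for a simple two-shape word: lips together, then open, then back toward neutral.
setKeyframe -time 1 -value 1 -attribute "M" blendShape1;
setKeyframe -time 4 -value 0 -attribute "M" blendShape1;
setKeyframe -time 4 -value 1 -attribute "Ah" blendShape1;
setKeyframe -time 8 -value 0 -attribute "Ah" blendShape1;
// With the character selected, Animate > Create Clip turns these keys into a named clip.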
It is a good idea to create a neutral mouth shape and "breath" shape in addition to
your words. You'll use the neutral mouth shape when the character is resting
between sentences, and you'll use the "breath" shape when the character is getting
ready to speak again.
If you have lots of dialog or more than one character speaking in your piece, two dis-
tinct advantages will accrue from this method in addition to the ease of placing words where
they need to be in a more intuitive manner (see the next section):
• Any repeated words ("the" is often repeated multiple times, for example) have to be
keyframed only once, because you can reuse the same source clip as many times as needed.
• You can easily share a library of words between characters. As long as they have the
same blend shape targets in the same order — even if the blend shapes look different
from one character to the other—you can transfer clips from one to the other, saving
even more setup time.
Thus, you can incrementally grow your word library (again, assuming all your characters have
the same targets in the same order) over time on a single project or even multiple projects.
For more information on transferring clips from one character to another, see
Mastering Maya 3, by Peter Lee and John L. Kundert-Gibbs (2001, Sybex, Inc.).
Words are stored in the Visor under Animation: Character Clips and Poses:
Clips.
When the words are basically in place, select each word (double-click it in the Trax Editor) and use the Channel box to adjust its length and scale so that it better fits the spoken word. If you decide a particular word is not good enough, you can open the word's keyframes in the Graph Editor (choose View → Graph Anim Curves in the Trax Editor menu) and adjust them, as in Figure 5.6.
After laying in, scaling, and weighting your words, add a blend between them if you like (select two clips and RM choose Blend Clips) and test your animation using a quick playblast! All the work on the front end creating character clips pays off here: the animation process itself is much faster than keyframing each word in place as you go. Figure 5.7 shows the Trax Editor and clips for a famous sentence.
Figure 5.6: Adjusting the animation curves for a word in the Graph Editor
"steal" the way their mouth looks for any portion of the dialog, imbuing your animation
work with just that much more realism and personality.
to open different files. In some cases, the file will show you the finished portion of the process.
Use these files as examples and continue working in either the Mcblendshapes.mb file or the Mcheadonly.mb file. Don't forget to save your work along the way!
Detaching Surfaces
When a person speaks, the mouth, nose, eyes, cheeks, and forehead all move in relationship
to one another. To provide easier (and separate) control over these movements, we'll split the
head into three sections. We'll then create blend shapes that utilize the separate areas. Once
all the blend shapes are created, we'll put MC back together again! The blend shapes will be
combined to move all areas of the face so that the animation doesn't look stilted. Open
Mcheadonly.mb on the CD-ROM, and look at how MC is naturally divided by his isoparms.
For our purposes, we want section one to contain the mouth, chin, and cheeks. Section two
will consist of the eyes and the forehead. Section three will be the back of the head.
When slicing the character, consider which groups of facial movements occur together. If you are working with a more realistic model, you will divide the face differently than we do with MC.
Figure 5.9: The selected isoparm illustrates how the face will be cut into sections.
Figure 5.10: The isoparm signifies where the surface will be detached.
Figure 5.11: MC's mouth, chin, and cheeks
Figure 5.12: MC's eyes and forehead
Figure 5.13: The back of MC's head
Blend shapes are a strong lip-synching tool. Keep in mind that they are also useful in a variety of other animation techniques.
example, if you are lip-synching the word "the," animate only "th." Figure 5.14 shows some
helpful blend shapes. Depending on the complexity of your character, you may need more or
fewer shapes.
With these basic pointers in mind, you are ready to begin creating expressions for MC!
Follow these steps:
1. Select the mouth_patch and press Ctrl+d to duplicate.
Each time you create a shape, move the new shape to a clean area of the workspace. Otherwise, all the shapes will be placed directly on top of one another. Also, do not freeze any of the transformations. If you do, the blend shapes will not work.
The first shape you selected is the target object. The second is the base shape. In general, all shapes up to the last one selected are target shapes. The last shape you select is the base shape.
Because you will be creating a variety of shapes, remember to rename your shapes
so that you can keep track of the sliders when you are keyframing. Don't forget to
delete the history after duplicating an object.
Figure 5.16: Target shapes allow for easier control over blend shapes.
5. Repeat the process of creating blend and target shapes for the mouth shapes suggested
in Figure 5.16.
When creating facial expressions, you might experience unexpected results if you
add or remove CVs.
When creating blend shapes from a group of objects, it is important that the target
object and the base object be grouped in the same order and have the same group
members.
Again, we'll manipulate the CVs into different poses. You will want to create the
shapes shown in Figure 5.17. Pose A shows surprise, excitement, and enthusiasm. Both eye-
brows are raised, and there are wrinkles in the forehead. The eyes are slightly more open.
Poses B and C show questioning emphasis and surprise by raising one eyebrow. Pose D illus-
trates anger and disgust. There are wrinkles on the bridge of the nose and forehead, and the
eyebrows are slanted inward.
Figure 5.18: Select similar CVs to move the left eyebrow and forehead upward in surprise.
Figure 5.19: Select these CVs to make the eyes widen in surprise.
8. Move the CVs along the bridge of the nose and the center of the forehead forward and
up to create wrinkles.
When creating shapes, avoid moving the CVs on the outermost isoparm of the
patch. Otherwise, you will have unexpected wrinkles in the blend shapes.
9. Select CVs on the top half of the eye and eyelid and move them upward, to widen the
eyes, as shown in Figure 5.19.
Be careful to choose only the CVs you want to move. It is easy to inadvertently select CVs you do not intend to include.
10. Manipulate the CVs until you are satisfied with the shape, and then choose Edit → Delete by Type → History.
Change the envelope attribute to -0.50 and move the slider again. Notice that
the eyebrows can now move down in anger or up in concern. The slider range
below zero makes the eyebrows move downward, and the range greater than
zero makes the eyebrows move upward.
Repeat steps 1 through 7 to create other facial expressions. As with the mouth blend
shape created earlier, use the upper_face blend shape to hold multiple targets.
Figure 5.20: Select these isoparms to reattach the surface.
Figure 5.21: The three sections of the head are reattached to form one shape.
Figure 5.22: A happy face and the sliders used to create the form
Creating a Character
A character allows you to key multiple objects as a single entity, which is exactly what we need
to do when lip-synching. When assigning members to a character set, you determine which
attributes from the objects can be keyed. Creating a character also allows you to create clips
and reduce animation time through nonlinear animation techniques. Once you create a char-
acter you can add subcharacters to the character set. Characters let you animate quickly and
efficiently while keying only the necessary attributes. Creating characters helps to organize the
clips and lets you easily see the keys for each frame. Pay attention when setting your character.
You will notice some crazy, not to mention strange, results if you forget to set the character.
To create a character, follow these steps:
1. Be sure that nothing is selected and that you are in the Animation menu.
2. Choose Character → Create Character Set.
3. Name the character MC, and set Character Set attributes to From Channel Box.
Once the character is created, you need to assign the attributes that can be keyed.
4. In the mouth Blend Shape window, click the Select button.
5. In the Channel box, highlight the shapes you want to include in the character, as shown in Figure 5.23.
6. From the main menu, choose Character → Add to Character Set.
7. For each blend shape, follow steps 4 through 6, adding the characteristics you want to animate to the character set. Be sure to include all the shapes you need for lip-synching.
Figure 5.23: Attributes to include in the character set
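The character command can do much the same job from MEL. In this sketch the blend shape nodes mouth and upper_face are the ones used in this chapter; note that passing whole nodes adds all their keyable attributes, whereas the From Channel Box option above restricts membership to the highlighted sliders, so treat this as an approximation rather than the exact equivalent.

// Create the MC character set and populate it with the blend shape nodes' keyable channels.
character -name "MC" mouth upper_face;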
Just as we created the blend shapes in sections, it is also easier to animate in sections.
Concentrate on the mouth movement first. When you are pleased with your animation, add
the eyes and forehead. Work in steps so that the task does not become overwhelming. We
will have separate clips for the mouth (word) and eye (emotion) shapes. They can be
arranged to coincide in the Timeline at a later point.
Here are some guidelines to follow as you create the library:
• Use a mirror and watch yourself speak. Notice how your mouth moves, the shapes it
forms as you say specific sounds, and the amount of movement.
• When you are lip-synching, be a minimalist. Listen to the words and determine the
most prominent sounds. Animate the word using the shapes that make up the main
sounds only. Watch your animation, and you will be surprised at the results. Your
mind will fill in the remaining shapes.
• Create playblasts to monitor the movement. You will soon discover that it is much eas-
ier to fill in gaps in the lip-synching than to remove unnecessary animation. The rate of
speech influences how much detail you place on each word: the faster a character
speaks, the less movement you need to give to an individual sound. The phonemes
influence the character's movements before and after they are spoken. Consider the
timing of each movement. If a character suddenly opens its mouth or pronounces a
long vowel, you may find it necessary to tone down the movement. The mouth will
appear as if it is randomly moving out of control if you do not allow enough frames for
the motion to take place.
Creating a library lets you use the Trax Editor to edit a clip as necessary each time
the word is used.
the mouth will appear to be moving rapidly out of control. Even if the clip looks fine, when
you scrub through the animation, it will look overdone when the words are joined together.
Keep in mind that 30 frames equals 1 second, and sometimes people speak more than one
word per second. Minimalization is the key to successful lip-synching. A clip can consist of
only one or two keyframes. For example, the words "zay" and "are" have only one keyframe
that is scaled to last the length of the word. Each clip will begin at frame 1 and can be moved
to a new position later.
Follow these steps:
1. Open MClast.mb from the CD-ROM to see a finished version of the clips and final animation. Open Mcreadytotalk.mb to begin adding clips, or you can continue working in your own file.
2. Set the character to MC.
3. Listen to the Mcmagnifique.wav file on the CD-ROM to hear the voice you will be lip-synching. Try to picture the shapes that you will need.
4. Open the Blend Shape window (choose Window → Animation Editors → Blend Shape).
Be sure that every blend shape you want to use is assigned to the character set
before you begin setting keyframes.
5. Move to frame 1 and create the zay clip by moving the slider within the mouth blend
shape (remember you can also use a combination of sliders) until you have a shape
similar to that in Figure 5.24.
6. Click the Key All button on each of the blend shapes you moved to create the shape.
This clip will only contain one keyframe.
7. Switch to the Animation menu.
8. Choose Animate → Create Clip.
9. Name the clip zay.
10. Click the Put Clip in Visor Only check box, and create the clip.
11. Create the "are" clip by following steps 1 through 10.
12. Name each of the clips for the word you are animating. Figure 5.25 shows the "r"
sound that MC says next.
Figure 5.24: Combining the results of the ah slider with a little of the ee slider will form a long a shape.
Figure 5.25: Slider positions used to create an "r" shape
13. Follow steps 4 through 12 to create keyframes for the word "magnifique," as shown in
Figure 5.26. You will need four keyframes. Start at frame 1, and leave one to three
spaces between each keyframe. You can leave more spaces if the position of the mouth
is moving from a mostly open to closed position or vice versa.
When setting keyframes for the words, allow one frame between consonant sounds
and two or more frames on both sides of long vowel sounds. Later in this chapter,
we'll scale the clips to match the length of each word.
14. Create a clip named closed, in which the mouth is closed. This clip should only be one
keyframe.
You might want to create a few different closed clips to add variety to the animation.
For example, you could create a closed clip, a smile, a frown, and a barely open clip, as
shown in Figure 5.27. Don't forget you can use the happy face blend shape you created ear-
lier as a starting or stopping point for the sentence.
To import sound, choose File → Import and browse to the sound file you want to reference. In this case, choose MCmagnifique.wav from the CD-ROM.
2. Place the following clips in the Trax Editor: closed mouth, zay, are, magnifique, and a
second closed mouth.
3. Place the closed mouth clip four frames before the character should begin to speak.
MM scrub over the Timeline to hear the sound track. This will help you coordinate
the sound and the movements.
4. Place the zay clip on the frame with the first green sound line in the Time Slider.
5. Align the other clips so that they begin a frame or two before the matching word is spoken.
Scaling Clips
Scaling a clip is a great way to extend the use of your animation work. When words are
repeated, they can be spoken at different rates of speed. If the character is excited, it might
speak faster than normal; if the character is distracted, its speech patterns might slow down.
Scaling a clip lets you use the same clip each time a word is spoken, even if the speed or
emphasis of the word is different. To scale a clip, mouse over to the lower end of your clip.
A straight arrow with a line appears, indicating that you can now drag to scale the clip (see
Figure 5.28).
Cycling a clip lets you repeat an animation as many times as the script calls for. Cycling
a clip is especially useful when someone is laughing. You can set the cycle to Relative to add
a little variety to the motion, or you can set the cycle to Absolute to begin and end the clip
exactly the same each time a cycle completes. To cycle a clip, mouse over the top end of your
clip. A curved arrow will appear. Drag the clip to the desired ending position. A black tick
mark denotes a complete cycle. You can scale and cycle a clip more than once, giving your-
self a wider variety of results from the same piece of animation.
To scale a clip, follow these steps:
1. Scale the clip until it matches the length of the word. You can scrub through the Time-
line to hear when each sound is said and when words begin and end. This should be all
the scaling you need to worry about for the words zay and are.
2. Right-click the magnifique clip and choose Activate Keys to return your keys to the
Time Slider for editing.
3. The keyframes begin with frame 2. To simplify the scaling process, Shift+click frame 1
in the Time Slider. Drag the mouse until a red bar covers the last key. (Keyframes are
denoted by red bars in the Time Slider.)
4. Click the middle two arrows in the red bar, and drag the keys to the frame before MC
begins to say "magnifique."
Now you can continue to edit the keys in the Timeline so that the movements are
scaled to match the sounds.
5. Scrub through the Timeline and listen to magnifique. Shift+click and highlight the keys
you need to position, and move them to the appropriate position in the Time Slider.
Match the placement of the keys in the Timeline (or Graph Editor) with the empha-
sized portions of the words.
6. Return to the Trax Editor window and right-click the magnifique clip. Choose Activate
Keys to remove your keys from the Time Slider, which allows you to use the mag-
nifique clip. In this way, the shapes are formed as the words are spoken. Once you are
familiar with scaling keys, this portion of the process will move quickly.
Since the sentence is spoken without a pause, blending the clips provides a smooth
transition between words. Notice that the mouth does not close entirely after every word.
Use the closed mouth shapes at the beginning or end of a phrase. If there is a long pause,
consider whether the character is happy or sad and what needs to be portrayed during a
pause. You might want to blend the nonverbal clips with the word clips.
7. Select the zay clip and Shift+select the are clip.
8. In the Trax Editor, choose Create → Blend.
9. Create blends between the remaining clips.
Scrub through the animation. If the character's movements are overemphasized, make
the correction in the Trax Editor. The weight attribute determines how much movement
appears in the animation. To reduce the effect of a blend, lower the weight. If you want to
exaggerate the motions, increase the weight. The script might call for the same word to be
whispered in one scene and screamed in another, or perhaps sometimes the words are just
overexaggerated. The weight of the clip influences the extent of action in each clip.
Select the clip, and in the Channel box, set the weight to 0.2. Now, scrub through the
animation. Change the weight back to 1, and notice the results. The weight attribute is use-
ful for conveying emphasis and emotion. When you play back the animation, you will dis-
cover that you need to reduce the weight of each clip. Experiment with values from zero to 1
until you are happy with the results.
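Because the weight is an ordinary channel on the clip node, you can set or key it from MEL as well; zayClip here is a placeholder for the clip node's actual name in your scene.

// Soften every shape in one clip, or key the weight to animate the emphasis over time.
setAttr "zayClip.weight" 0.2;
setKeyframe -attribute "weight" zayClip;   // keys the current weight at the current frame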
Alternatively, you can drag+select all your Trax clips and move them two frames
or so to the right in the Trax Editor.
Tweak the clips until you are pleased with the lip-synching, as shown in Figure 5.29
(also see a rendered sequence of MC speaking in the file MCmagnifique.mov on the CD-ROM).
If the mouth looks as if it is moving wildly and is not in synch with the sound, you probably
tried to create too many sounds for a word, or you need to reduce the weight assigned to the
clip. Remember to lip-synch phonetically, and not according to the actual letters.
Now that you are pleased with the words, add some movement to the upper portion of
the face. You might want to set the character to MC and simply create one clip of movement
that covers the entire sentence, by setting keyframes throughout the Timeline each time you
want movement. Again, do not overdo the motions of the forehead and the eyes. Say the
sentence while looking in a mirror. Notice when you move the upper portion of your face,
and re-create those movements using the blend shapes you created. Don't forget to add
movement to the character's head. Remember, a person's head is not still as they speak! Keep
in mind that lip-synching requires an immense amount of patience and attention to details.
The minor quirks or subtle movements in an animation separate the good characters from
the great ones.
For the most successful motion track, track the length of the shot in segments of
movement broken into directions. For example, a segment might start at a slight
tilt of the head up and end when the head stops moving up or changes direction
while moving up.
Figure 5.30: A simple joint structure to animate the jaw movements
Figure 5.31: Group the gums and teeth under the joint system, but don't bind them.
The only thing that remains is to attach the gum geometry to the inside of the lips. By selecting isoparms on the outside edge of the gums and the inside edge of the lips, we can choose Edit Curves → Duplicate Surface Curve with history turned on to make lofts that extend between the two geometries that will deform and fit as the mouth shapes change, as shown in Figure 5.32.
Now it's time to bind the geometry to the bones. If you have already grouped the gums and teeth under the joints, go ahead and temporarily ungroup them from their joints. Once we bind the snout to the joint system, we'll regroup the gums and teeth back to their respective joints so that they move properly when the jaw is rotated. We're doing this because it makes no sense to skin them to the joints along with the rest of the snout.
Figure 5.32: Loft surfaces between the gums and the model's lips, and keep the history.
Select the snout, select the root joint, and then choose Bind Skin → Smooth Bind. This attaches our geometry to the joint structure fairly well, but we need to paint some weights to make it perfect. In Shaded mode (press 5 in the persp window), select the snout geometry, and choose Skin → Edit Smooth Skin → Paint Skin Weights Tool (see Figure 5.33).
Choose a round feathered brush, and
set a good upper limit radius for your brush
shape. In the Influence section of the Tool
Settings window, select the very bottom joint
of the lower jaw and make sure that the
entire snout geometry is painted black. You
want no influence coming from the bottom
level joint. Do the same for the bottom joint
of the upper jaw.
any unnecessary movement toward the back of the snout, as that will have to blend into the
live action plate.
On more thorough projects when more than the snout is being replaced, however, it
would be wise to paint some weight farther down the jaw to better mimic real movement in
muscle and skin. Because this exercise only calls for the jaw joints, and subsequently the
painted weights, to make general jaw movement, the weights painting should be relatively
straightforward. Figure 5.34 shows the painted weights of the four major bones in the skele-
tal chain. Notice how only three joints truly control the model: root joint, upper jaw joint,
and lower jaw joint.
When you've painted all the weights, grab a frosty beverage, put your feet up, rest your
wrist, and check out what's cooking on TV. A relaxed animator is a happy animator.
Texturing Setup
At this point, before we set up our shading, we're going to duplicate (without history) the
snout geometry once. Make sure your joints are at the bind pose, and if not, select the root
and choose Skin → Go to Bind Pose.
Figure 5.34: View of the four major bones and their painted weights.
If you are not at the bind pose before you duplicate your snout for the blend shapes, your blend shapes will not work later in the procedure. It is vital to be at the bind pose before you copy the snout. Otherwise, the deformation chains will be out of order, and your blend shapes will double transform when you animate the blends.
Select the snout geometry and duplicate it (again, without input connections turned on
or upstream connections). This will be the base for your blend shapes. Keep it out of the
way, hidden or templated, for now. Later we will duplicate that head 11 times and lay out all
the heads in a nice grid. Those will be our blend shapes. The file Rox_Final_Ready_for_Texture.mb in the scenes folder of the Rox_lip_sync project will catch you up to this point. It has the snout model, inner mouth, bound skeleton with painted weights, and one copy of the blend shapes ready for you. Just add texture, one cup of boiling water, and stir!
CD) and assign your shader to it. If we were to leave it there, any deformations on this object
would make the texture swim on the object, and the effect would not work. We need to
make the projected texture "stick" to the object. For that, we'll create a texture reference
object, which will make another copy of the snout that Maya will use to reference the tex-
ture information to map to the actual rendered snout.
Select the snout, and in the Rendering menu, choose Texturing → Create Texture Reference Object to make a templated copy of the snout at the exact position of the snout.
Now, duplicate the joint structure (preferably without the tongue joints attached if you
decided to create them). We'll be attaching that copy to the templated texture reference
object with a smooth bind. Now select the renderable snout, select the texture reference
object, and in the Animation menu, choose Skin → Edit Smooth Skin → Copy Skin Weights to duplicate the same skin weights from one snout to the other. We want both snouts to move precisely together when we line them up with the real cat, and likewise when we track the built snout to a live moving subject.
So, to summarize, we'll position and track both the renderable snout and the texture
reference object at the same time by manipulating both skeletons. Once we position the
snout and the texture reference object using both joints to match the picture of my cat in
your persp window (see Figure 5.35 in the next section), we'll be ready to get on to finishing
the lip-synch setup. If the model is not lining up precisely with the view, don't fret; we'll need
to do the lining up on a component basis.
You have to make sure that both the renderable skeleton chain and the reference object
skeleton chain are tracking precisely together.
If one is off, the texture in your final render will slip and swim. If you motion track
using only one set of joints, make sure you copy the animation from the Render chain to the
reference object chain or vice versa. Otherwise, just be sure to select both joints at once when
making your rotations and movements in your tracking.
Selecting both joints simultaneously should be fairly easy since the joints are right on
top of each other. Just use a marquee selection (drag the mouse over the joints to select them
both) to grab both. It seems like a good idea to use a constraint or an expression to make the
joints have the same motion and orientation automatically, but you would not want this.
Once you have the snout positioned and/or tracked to the background, you'll want to have
some measure of independent movement in the render joints, most typically in the lower and
upper jaw joints. Having them tied to the reference joints will disallow any independent
movement for your animation, and you will be unable to move the upper or lower jaw joints
of the render model to lip-synch.
We have two skeleton chains so that we can track both the reference object to the
moving texture and the renderable snout to the BG plate of the cat's head. We
don't simply assign both the renderable and the reference texture object to the
same chain because we need the ability to manipulate the renderable snout's joints
differently. We will need to animate some of the joints of the renderable snout inde-
pendently of the reference object snout. For example, we'll need to move our ren-
derable object's jaws to animate to the lip-synch, while leaving our texture reference
object's jaws unmoved. If both the renderable snout and its texture reference object
were bound to the same skeletal chain, this would be impossible.
As long as the texture reference is tracked properly, the textures will not swim on the
renderable object. As long as the track (except for the jaws' movement caused by talking) for
the renderable object is also accurate, the third track will be spot on as well, creating a seam-
less scene in the final comp. See Figure 5.36.
The track might involve more than simply rotating the joints to keep the snout in
position. It might also involve animating the clusters you've created to tweak the
geometry into place.
An F or V shape starts with an E shape. Grab the CVs of the first rows of the bottom
lip, and move them up toward the upper lip.
A T shape is also like an E shape, but it only involves moving and scaling out the top lip
CVs to bring the upper lip slightly higher. Grab the first few CV rows of the edges of the upper
lip and bring them slightly higher than the middle, making a bit of a smile. See Figure 5.37 for
an example of mouth shapes.
The file Rox_Final_No_Anim.mb in the scenes folder of the Rox_lip_sync project on the
CD-ROM will provide you with a fully built and bound snout and skeletal chain that has
already been conformed to fit the background plate. The blend shapes are provided and set up,
though the task of actually building the mouth shapes is left up to you. It is important to make
your own mouth shapes for animation, as it will give you more control over the lip-synch.
Be sure not to delete or add any CVs or isoparms to any of the blend shapes. All
the blend shapes need to be uniform with the original renderable object in that
respect.
As a matter of habit, name all your blend shape snouts according to their letter
sound.
Select the blend shapes in the order that you would like them to appear in the Blend
Shape Editor, select the renderable snout, and choose Deform → Create Blend Shape. In the Create Blend Shape Options dialog box, make sure that In-Between and Delete Targets
are turned off and that Check Topology is on, as in Figure 5.38. By leaving the In-Between
check box cleared, we're setting up the Blend Shape Editor to display a separate slider for
each mouth shape. In this way, we can combine different mouth shapes for even greater flexi-
bility. And, of course, leaving the Delete Targets check box cleared will keep the original
blend shape objects in the scene, in case you need to adjust the mouth shapes later, which
you can do. Check Topology to make sure that the renderable snout object and the blend
shape objects have the same number of isoparms and CVs.
Click the Advanced tab (see Figure 5.39), and switch the blend shape type from Default
to Parallel. Switching the blend shape type is important. If you don't do so, the renderable
head will blend right off the joint system, forcing one of us to loudly snarl at you, swing our
arms around violently, and throw our shoes at you. Nobody wants that to happen.
By selecting Parallel, you allow the deformations on this object (namely the joint sys-
tem, the blend shapes, and any clusters) to behave nicely toward one another. Otherwise, it's
a World Wrestling Federation free-for-all. Setting the deformations to Parallel will set up the
deformation order so that the blend shape deformations will occur in parallel or in tandem
with the other deformations on the object, namely the skeletal deformations we've set up for
the snout. Not setting the deformations to Parallel will create strange results when there is
animation on the skeleton and the blend shapes.
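For reference, here is a rough MEL equivalent of the Create Blend Shape step with the Parallel and Check Topology settings described above. The object names are placeholders for your target snouts and your renderable snout.

// Select the target heads first and the renderable base object last, then build the blend shape.
select -r A_snout E_snout O_snout M_snout;    // ...plus the rest of your target heads
select -add renderSnout;                      // the renderable snout, selected last, is the base
blendShape -name "snoutShapes" -parallel -topologyCheck true;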
Once your blend shapes are set up, you're in business. Test one of the sliders in the
Window → Animation Editors → Blend Shape, and make sure the snout doesn't fly off the
joints. If it does, delete the blend shape and try it again, checking your settings. The most
common issue will be one of two things. First, it might be that Maya hates you and will do
everything it can to make sure your lip-synch doesn't come out right, or second, it could be
that your joint chain was not in the bind pose when you duplicated your original renderable
Figure 5.38: The Create Blend Shape Options dialog box
Figure 5.39: On the Advanced tab, be sure to select Parallel Deformation Order.
snout for the initial blend shape. This error is fairly common, but is no less annoying because
it basically means you have to go back and redo all that again.
At this point, clean up your scene file. Make sure it's all named properly. Make a few
display layers and assign the reference object snout and its joints to one layer. Assign all the
mouth and tongue stuff into another layer. Assign the blend shapes to a layer that's hidden.
Assign the renderable snout and its joints to another layer, and so on. Be organized. The
cleaner and easier the file is to work with, the better the result and less likely you'll need a
stiff drink after the project.
"Do You Understand the Words That Are Coming Out of My Mouth?"
Load Rox_Audio.wav from the CD-ROM into your media player and listen to it. You will be
lip-synching about 30 seconds worth of audio. That's a lot of audio for one shot, but this
example exposes you to lip-synch using only one setup. When you're comfortable with what
the voice is saying and you have a good feel for how she is saying it, load the file into your
Maya setup. Now, follow these steps:
1. Set the frame rate at 30fps.
2. Choose File → Import.
3. Locate your audio file. It's better to copy the file onto your hard drive, into your sounds
directory in the current project if it's on a CD or removable drive. This step imports the
file into the scene.
4. RM click the Time Slider, and choose Sound → <audio filename> from the shortcut menu to display a cyan-colored waveform superimposed on your Time Slider, as shown in Figure 5.40. If the waveform doesn't display, choose Window → Settings/Preferences → Preferences. In the Sound section, set Waveform Display to All, as shown in Figure 5.41.
5. RM click the Time Slider again, and choose Sound → Rox_Audio to display the
Attribute Editor for the sound.
6. Set the offset to 1 instead of 0 to start the audio on frame 1 rather than frame 0. (Hon-
estly, starting at frame 0 just gives me the willies.)
7. RM click the Time Slider (boy, we're sure doing enough of that lately!), and choose Set
Range To Sound Length to set the Time Slider range to the exact length of your
audio, which should now read frame 1 to 822.
8. Use the range slider to zoom in to a more manageable time range, such as 30-50
frames at a time that correspond to what the voice is saying.
9. RM click the Time Slider again, and choose Playblast (or use your hotkey or the menu
selection). Be sure that the playblast will play in Movieplayer and not in fcheck. Win-
dows Media Player or the QuickTime player will play the audio along with the visual
playback while fcheck will not. To change from fcheck to Movieplayer, RM click the Timeline and choose Playblast Options. Click the Movieplayer radio button.
Now notice the audio in the context of your frame range. Once you playblast the scene
or even click the Play button, the audio loads into memory, which makes scrubbing the
audio back and forth rather speedy.
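If you would rather script the setup in steps 1 through 7, the sketch below covers the frame rate, the audio offset, and the playback range. The sound node name is an assumption, so match it to whatever Maya actually creates when you import the file.

// 30fps, audio offset by one frame, and a playback range matching the track (frames 1-822).
currentUnit -time ntsc;
sound -file "Rox_Audio.wav" -offset 1 -name "Rox_Audio";
playbackOptions -minTime 1 -maxTime 822 -animationStartTime 1 -animationEndTime 822;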
Again, the file Rox_Final_No_Anim.mb in the scenes folder of the Rox_lip_sync project on the CD-ROM will bring you up to speed. All this file needs are the mouth shapes modeled into the blend shape objects already laid out for you and the final animation.
"Do You Understand the Words That Are Coming Out of My Mouth?"
Load R o x _ A u d i o . w a v from the CD-ROM into your media player and listen to it. You will be
lip-synching about 30 seconds worth of audio. That's a lot of audio for one shot, but this
example exposes you to lip-synch using only one setup. When you're comfortable with what
the voice is saying and you have a good feel for how she is saying it, load the file into your
Maya setup. Now, follow these steps:
1. Set the frame rate at 30fps.
2. Choose File Import.
3. Locate your audio file. It's better to copy the file onto your hard drive, into your s o u n d s
directory in the current project if it's on a CD or removable drive.This step imports the
file into the scene.
4. RM click the Time Slider, and choose Sound audio filename from the shortcut menu
to display a cyan-colored waveform superimposed on your Time Slider, as shown in. Fig-
ure 5.40. If the waveform doesn't display, choose Windows Settings/Preferences
Preferences. In the Sound section, set Waveform Display to All, as shown in Figure 5.41.
5. RM click the Time Slider again, and choose Sound Rox_Audio to display the
Attribute Editor for the sound.
6. Set the offset to 1 instead of 0 to start the audio on frame 1 rather than frame 0. (Hon-
estly, starting at frame 0 just gives me the willies.)
7. RM click the Time Slider (boy, we're sure doing enough of that lately!), and choose Set
Range To Sound Length to set the Time Slider range to the exact length of your
audio, which should now read frame 1 to 822.
8. Use the range slider to zoom in to a more manageable time range, such as 30-50
frames at a time that correspond to what the voice is saying.
9. RM click the Time Slider again, and choose Playblast (or use your hotkey or the menu
selection). Be sure that the playblast will play in Movieplayer and not in fcheck. Win-
dows Media Player or the QuickTime player will play the audio along with the visual
playback while fcheck will not. To change from fcheck to Movieplayer, RM the Time-
line and choose Playblast Options. Click the Movieplayer radio button.
Now notice the audio in the context of your frame range. Once you playblast the scene
or even click the Play button, the audio loads into memory, which makes scrubbing the
audio back and forth rather speedy.
Again, the file R o x _ F i n a l _ N o _ A n i m . m b in the s c e n e s folder of the R o x _ l i p _ s y n c project
on the CD-ROM will bring you up to speed. All this file needs are the mouth shapes modeled
into the blend shape objects already laid out for you and the final animation.
Figure 5.41: Adjust the preferences to display the waveform.
Figure 5.42: The Blend Shape dialog box with Rox's mouth shapes
When you get to the end of the current time segment, move on to the next segment, and con-
tinue until you're done with the overall jaw movements. Playblast and make sure everything looks okay.
Now you're ready for the lip deformations through the blend shapes.
Mouth Deformations
Once you have the jaw movements timed correctly, the next step is to use the lip deforma-
tions to make the different mouth shapes. Choose Window → Animation Editors → Blend
Shape to open the Blend Shape dialog box, as in Figure 5.42.
Now comes the fun part. Having spent oodles of time setting up and crafting our blend
shapes, we're ready to spend oodles of time pushing some sliders and keyframes around.
Basically, and this is about all there is to say about this stage, use the Blend Shape dialog box
to match the mouth shape on the model to the sounds in the audio. The more you do it, the
better and faster you get, but here are some guidelines:
• Use a combination of mouth blend shapes to achieve a particular sound, phoneme, or
vowel.
• Mouth out the phonemes yourself slowly and pay attention to how your mouth forms
before, during, and after the word or sound. Don't worry about looking crazy at your
desk as you talk to your keyboard. We all do it.
• Mouth shapes will differ between two instances of the same exact word depending on
the context in which they are spoken and which words follow or precede them.
• Try not to go from one lip shape to another in less than two frames or more than four
frames. There are always exceptions to this rule, but be careful. You don't want the lips
chattering too fast or moving in slow motion.
• If a sound is being held for a while in the audio file, make the mouth shape and hold
that shape, but add some slight movement in the lips and/or jaw. Never allow yourself
to keyframe between two jaw movements/blend shapes in more than four frames,
though. That would make it look as if the lip-synch is animated in slow motion.
• Run a first pass for each 30-50 frame section, playblast it with the audio on, and then
go back and adjust the joint rotations and blend shape keyframes to finesse your work.
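Although the Blend Shape dialog box is the natural place to set these keys, the same thing can
be done from MEL, which is handy when you want to rough in a pass quickly. The snippet below
is only a sketch: the node name mouthShapes and the target alias MMM are placeholders for
whatever your blend shape node and targets are actually called.

setKeyframe -t 40 -v 1.0 "mouthShapes.MMM";   // lips pressed together for the M sound
setKeyframe -t 43 -v 0.0 "mouthShapes.MMM";   // two to four frames later, release the shape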
Final Touches
You may have noticed that Rox doesn't have any whiskers.
Well, we removed them. Before you call the ASPCA on us, we
painted them out of the background plate in Photoshop. It
would have looked mighty weird doing a mouth replacement
without having painted her whiskers out. When the cat begins
to talk and the snout deforms, the whiskers won't move, and
that would look wrong. Now, however, we need to replace the
whiskers with CG whiskers. You can do so with Paint Effects or
geometry that is attached to the snout model, among other
ways. We'll leave that up to you.
Figure 5.43: Turning off the filter type will help sharpen the texture file when it's rendered.
What ultimately separates a good job from a bad job is the level of detail. By combining
multiple layers of simplicity, you can create a whole of complexity that is elegant and
professional looking. This example has taken us through essentially only one-third to
one-half the battle in mouth replacement, the
setup procedure. The rest of the battle lies in the animation that flows from your heart to
your mouse to the screen. Sounds all New Age, but it's true. Only about the third or fourth
time doing this example will you start becoming adept, so don't give up. Keep at it as long as
it's fun or it pays. Figure 5.44 shows a still from the final animation (Rox_Talks.mov), which
is available on the CD-ROM.
Summary
Although lip-synching isn't the "sexiest" animation job, it is extremely visible and difficult to
get just right. Fortunately, Maya provides a tool set that makes this job much quicker and
more accurate, especially for longer or series projects in which large amounts of dialog are
spoken. Lip-synching is time intensive and exacting, but does offer a number of creative
choices and is rewarding when done correctly. Whether done as part of a stylized animation,
as in our first hands-on example, or to make a "real" animal speak, as in the second, lip-
synching is an effect best judged by the way it disappears from our consciousness after a few
moments. Like so much in life, if lip-synching is done right, it looks easy and natural, belying
the vast work that goes into creating it.
Creating Crowd Scenes from a
Small Number of Base Models
Emanuele D'Arrigo
directors, producers, and artists themselves, while constantly fighting budget and time con-
straints, at least have complete freedom of visual choice.
Natural forces such as water in The Perfect Storm and asteroids in Armageddon and
Deep Impact unleashed hell on the screen. Dinosaurs with jiggly muscles and wrinkled, col-
orful skin have been brought to life on live-action background plates in a seamless integra-
tion, as in Jurassic Park and Disney's Dinosaurs. With realistic 3D human characters
deformed in The Mask, violently crashing cars in Meet Joe Black, and acting in a fully 3D
environment in Final Fantasy, the ultimate challenge has been met and defeated. Or has it?
Of course not. Many brand-new challenges stemmed at one point or another from the
brilliant minds of many individuals and teams involved in the industry, always trying to re-
create again and again that sense of amazement from the early days of computer graphics in
themselves and in the audience.
In some cases, it's "just" a matter of new algorithms being discovered and implemented
in the software: inverse kinematics, displacement, radiosity, and other new techniques are
thrown into the battlefield, allowing unprecedented feats in modeling detail, animation style,
and rendering realism. In other cases, however, it's a matter of mass: the collective size and
complexity of the sheer number of objects in a scene simply overtaxes computers.
Heroes
A crowd is not just a group of characters. A crowd is a character in and of itself. A crowd
acts by itself as a single entity, sometimes as the main character of the shot and sometimes as
a background element.
For example, let's consider a shot that portrays a stadium filled with people or furry
pink elephants with bright yellow dots—never limit your creativity. Let's imagine that two
characters are talking to each other in the foreground, well in the frame. Unless there are
special needs, they will probably get all the attention, while the audience swiftly discards the
crowd as a background element, even if animated. Then, let's imagine a second shot, with no
elements other than a wide view of the crowd in the stadium. In this second case, the audi-
ence's attention is focused on the crowd as a whole.
The process of directing the audience's attention doesn't just happen, though. You
must carefully and intentionally mold the process for effective storytelling. Or at the very
least, you must be aware of where the attention of the audience is and is not.
For this purpose, two dramatically important perception laws help us understand
where our eyes—and minds—tend to go when we look at something:
• In a still scene, the human eye focuses on something that moves.
• In a moving scene, the human eye focuses on something that is still.
These laws, evolutionarily hardwired in our brains (probably from ancient times, when
something moving was either something edible or something that considered you edible) give
a good indication of what our eye is going to do. For example, if a shot shows a group of
running characters, and one of those characters stops, our attention is immediately drawn to
that character. On the other hand, if a shot shows a close-up of 50 pink elephants in a foot-
ball stadium, and one suddenly stands up, screams, and gestures to the referee, you can bet
that our attention is immediately drawn to that particular pink elephant.
Although these laws apply to the staging of any scene, including live-action films and
theatrical presentations, they effectively apply in reverse for a crowd: to focus audi-
ence attention on a single detail or individual is exactly what you don't want when animat-
ing and creating a group of characters. The focus is not supposed to be on a detail, on one
individual doing something different from the others. If the story actually requires such a dis-
tinction, that character or those characters are not "technically" part of the crowd; instead,
they are "heroes," and you must handle them separately, most likely manually, with the stan-
dard tools offered by the software.
This does not mean, however, that a crowd doesn't need details. Au contraire! Model-
ing, animation, and rendering can and should add details to the characters in the crowd.
Our pink elephants, for instance, could all wear the tiny bright yellow hat of the team they're
supporting at the stadium. Or they could all flap their ears continuously, to overcome the
intense heat of the hyper technological stadium unfortunately constructed in the middle of
the jungle. But none of them should visually stand out, say, with a bright green hat or non-
flapping ears, unless there's a reason for them to be the focus. You must visually balance
details in an overall uniform fashion, characterizing the crowd, not the individuals that
make it up.
As you can see in this simple example, in the creation phase we must be really careful
with the apparently small numbers of features of the base character. We must try to limit
model and setup to only what's necessary and unavoidable, and we must flag some charac-
teristics as removable by the people downstream in the production pipeline.
Additional utilities for the animators are six IK handles: two in the ankles, two at the
tips of the feet, and two more in the wrists. Two nulls, or locators (dark green), control the
pole vector of each arm's two-bone chain.
Now, let's look at the shader tree. It too is simple: eight nodes in all. Starting from the
top node, a Lambert node, the tree splits into surface colors and a fake rim light. The rim light
is given by a Sampler Info node whose facing ratio output is piped into a black-to-red ramp.
Although this might sound complicated, it is actually nothing more than a bare X-ray shader,
the only difference being that its output is directed to the incandescence input of the Lambert
node instead of to the transparency. In the ramp itself, moving colorEntry[2] up and
down changes the overall thickness of the red rim, faking a back light behind the character.
The two surface colors—yellow and blue—are provided by two monochromatic ramps
mixed together by a layered texture node. A third input to this node is a texture file, painted
on the 3D model with Maya's integrated painting tools and then saved as a 1024 x 1024 .tga
picture (base_humanoidShape.tga on the CD-ROM). The yellow color is placed on top of
the blue background color using this file as a mask, generating the effect of a yellow pat-
tern on the blue skin of the character. Figure 6.3 shows the three colors of a Zoid.
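For readers who prefer to build this network from MEL, here is a minimal sketch of the fake rim
light. The node names are placeholders and the ramp colors are left to taste; only the
facingRatio-to-ramp-to-incandescence wiring is the point.

string $lam  = `shadingNode -asShader lambert -name "zoidMat"`;
string $info = `shadingNode -asUtility samplerInfo`;
string $ramp = `shadingNode -asTexture ramp -name "rimRamp"`;
connectAttr -f ($info + ".facingRatio") ($ramp + ".vCoord");    // facing ratio drives the ramp
connectAttr -f ($ramp + ".outColor") ($lam + ".incandescence"); // fake rim, not transparency
// Edit rimRamp's colorEntryList so it runs from black to red; sliding colorEntryList[2]
// up and down then changes the apparent thickness of the red rim, as described above.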
The character is encoded as a Maya binary (*.mb) file. I have a rule of thumb about
this, not unusual in the 3D industry, and that's why I'm mentioning this apparently trivial
topic. Maya binary files load faster than Maya ASCII files. The Zoid_base.mb file loads two
to three times faster than the same file saved as Zoid_base.ma. Saving your finished model
files in Maya binary is a good idea because these files usually contain thousands of coordi-
nates for vertex positions and numbers for topology information. You can then switch to
Maya ASCII when you put together your entire scene with all models, especially when the
scenes become complex and have
references in them.
As you know, you can read
and edit an ASCII file using a sim-
ple text editor. This can help in at
least two situations. For example,
a scene has 50 references to model
files. If you have an ASCII file, you
can switch from low-resolution to
higher-resolution models without
opening the scene. Simply use the
Replace function available in any
decent text editor, and you can
start right away with the high-
resolution render. (See Figure 6.4.)
Here's another example: Something goes horribly wrong, and your extremely precious scene is
corrupted. You can open an ASCII file and debug it manually to find and correct the line that
generated the loading error. In the worst case, you will lose only the corrupted lines instead
of the last two hours of work.
Figure 6.3: The three colors characterizing a Zoid
Figure 6.4: The first lines in a *.ma file include the path and filenames of the referenced models.
Replacing them with the help of a text editor updates the scene without the need to open it in Maya,
a time-consuming operation when the crowd of characters is in place.
This script admittedly doesn't have a fantastic GUI. I prefer to keep interfaces as small as
possible and focus on writing a robust core procedure, the one actually doing something to
the scene, in this case Zoid_hueVar(). Once the script is working correctly and is properly
debugged, there is no harm in going back to the GUI code and improving it or having a GUI
programmer take care of it.
The command line becomes important when things get slow. After the testing is done and
all the useful input values and ranges are known, you can use the interactive GUI version of the
script to modify the crowd and all the variations of its individuals. But this can take a while if, for
example, you have 1000 Zoids in a scene. Even if the system can handle 1000 Zoids, it would
surely take its time doing so. Suppose that you calculate the runtime at about 6 hours and that it
should finish at about 2 A.M. And you need to run another script after the first one, which will also
take some time. You don't want to be there at 2 A.M. to run the second script, do you?
If the scripts you write are GUI based only, you have no choice but to wait, miss your
son's evening basketball game, go home about 3 A.M., and face a spouse who's not exactly
happy. Solution: the script must always have a GUI-free "core" procedure, ready to run from
the Maya command line, the Maya Script Editor, or from another script. With this method,
you can stack two or more scripts to run one after the other, even more than once, without
your assistance.
The last issue I want to mention about the architecture of these scripts is an important
consideration for anything meant to run for a long time, a usual occurrence with crowds.
At least in the interactive, GUI version, a script must generate a goodly amount of runtime
output (maybe with different "verbose" levels) and statistics about what it is doing and how
fast it is doing it. With a crowd scene, a script can easily run for ten minutes, one hour, or all
night. Therefore, I usually generate two types of output:
• A line of text for each character processed
• A report of the start and end times
In the main loop of the script, I usually generate a line of text for each character
processed, for example, "Processing Zoid27... done! - (27 of 50)". The frequency of this out-
put can range from less than a second to a few minutes. If new lines of output appear less
often than every few minutes, generate additional lines at key points of the code, most
notably after the instructions that require the longest time—for example, the loading and
unloading of big reference files. If the frequency is higher, and the output is literally racing
down the screen, it's wise to generate output only every 10, 100, or 1000 loops or
every 5, 10, or 25 percent of the total number of loops.
Furthermore, at the beginning of the script, I usually set a variable with the start time.
When the script finishes, the start time and the end time are reported together so that I can
check when and for how long the script has run.
Here's what can happen if you don't generate such output. You run the script overnight,
and during dailies, the director or the supervisor asks you to make a change. "No problem,"
you reply, but when they ask how much time it will take and if the executive producer can see
it before lunch, you have to answer evasively because the script that started at 8 P.M. could
have finished at 9 P.M. or at 7 A.M. the next morning. Generating output prevents this situation.
Listing 6.1 is an empty template, the basic framework on which I usually structure my
scripts. Easily recognizable are the two main procedures, the user-input handling and the
output-generating lines of code.
// Listing 6.1: An Empty Template
// templateScript.mel - v1.00.00
// by Manu3d - © 2002

// NOTE: the printed excerpt begins partway into the core procedure; the header below
// is a plausible reconstruction, and the main loop itself is elided in the book.
global proc int coreProcedure(int $verbose)
{
    // handling the selected objects
    string $obj;
    string $objList[] = `ls -sl -o -sn`;
    int $objNb = size($objList);

    string $time1;
    if($verbose)
    {
        print("----------\n");
        print("coreProcedure.mel output starts here.\n");
        print("----------\n");
        $time1 = `system("time /T")`;
    }

    // ... main loop over $objList goes here (elided in the excerpt) ...

    // reporting final stats
    if($verbose)
    {
        string $time2 = `system("time /T")`;
        print("----------\n");
        print("Main loop begun: " + $time1);
        print("Main loop ended: " + $time2);
        print("----------\n");
        print("coreProcedure.mel output ends here.\n");
        print("----------\n");
    }
    return 0;
}

global proc int coreProcedure_GUI()
{
    // creating the user interface with default inputs
    string $result;
    $result = `promptDialog
        -t "coreProcedure v1.00.00"
        -m "arg1 arg2 argN (or \"help\")"
        -tx "1.0 10.0 100.0"
        -b "OK" -b "Cancel"
        -db "OK" -cb "Cancel"`;

    // handling the user inputs
    string $inputLine;
    if($result == "OK")
    {
        $inputLine = `promptDialog -q -text`;
    } else {
        warning("The user cancelled the action.\n");
        return 1;
    }

    string $buffer[];
    int $bufSize = `tokenize $inputLine " " $buffer`;
    if($bufSize < 1)
        error("At least \"help\" expected.\n");

    // handling the help request
    if($buffer[0] == "help")
    {
        print("Help - blablabla\n");
        print("Help - blablabla\n");
        return 0;
    }
    // ... the remainder of the template (argument parsing and the call to
    //     coreProcedure) is elided in the printed excerpt ...
}
Creating Variety
Why use two monochromatic ramps, a black-and-white mask file, and a layered texture if
painting the yellow spots directly on a blue background would lead to a single file and a
single node? If we could get by with blue and yellow characters only, that would be a good,
clean method. But if we want procedural chromatic control over our characters, splitting the
colors into three ramps (rim included) is crucial.
The goal of the script Zoid_hueVar.mel (on the CD-ROM) is to change one of the most
visible features of our characters: their colors. Color variation is actually one of the best ways
to create any crowd, even if the geometry of the models is identical. Differences in color, from
a distance, are enough to turn a CGI-looking, clearly duplicated bunch of identical characters
into a realistically variegated group. There's a glitch, though. Most of the time, selecting a ran-
dom color to replace an existing color is not enough. I used the scene file Zoid_16arr_hue.ma
(on the CD-ROM) to test the script Zoid_hueVar.mel, and I invite you to do the same. Sixteen
Zoids are comfortably placed in a 4 x 4 array, ready to have their color changed by the script.
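The CD's script is the reference, but the core idea can be sketched in a few lines of MEL. The
procedure below is a hypothetical illustration, not Zoid_hueVar.mel itself: it nudges the hue
of one color entry on each selected ramp by a random amount. Check your Maya documentation
for whether rgb_to_hsv expresses hue as 0 to 1 or 0 to 360, and scale $range accordingly.

global proc hueNudge(float $range)
{
    string $ramps[] = `ls -sl -type ramp`;   // select the Zoids' body-color ramps first
    for($ramp in $ramps)
    {
        float $rgb[] = `getAttr ($ramp + ".colorEntryList[1].color")`;
        vector $hsv  = rgb_to_hsv(<<$rgb[0], $rgb[1], $rgb[2]>>);
        float  $h    = ($hsv.x) + rand(-$range, $range);   // random hue shift
        vector $out  = hsv_to_rgb(<<$h, ($hsv.y), ($hsv.z)>>);
        setAttr ($ramp + ".colorEntryList[1].color")
                -type double3 ($out.x) ($out.y) ($out.z);
    }
}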
Running a Script
To run any script in this chapter, follow these steps:
1. Type the two following lines in the Maya Script Editor, customized for your needs:
source "myPath/myScript.mel";
myProcGUI;
You want to run the myProcGUI procedure from the file myScript.mel.
2. MM+select both lines, and drag and drop them onto a shelf. A new MEL icon becomes available.
3. Simply click the icon to run the script. A small prompt window will pop up requesting your inputs.
Figure 6.6: Wild color variations—input values: rel 180 all
Figure 6.7: Subtler color variations—input values: rel 60 all
You can test this script with the Maya file Zoid_16arr_size.ma on the CD-ROM.
Most of the parameters for these changes are internal to the script; there are too many
of them for the simple interface I had the time to develop. The only input the user needs to
provide is the minScale and maxScale, loosely defining the final height range of the Zoids. A
plethora of things happen in the script, though. First, the length of the Zoid skeleton is modi-
fied: he gets taller or shorter, the ratio between the length of his legs and the height of his
torso is modified, and the length of his arms is changed. Although some variations are
allowed, a general rule hard-coded in the script is that the length of the limbs is proportional
to the height scaling factor of the Zoid. This rule guarantees that a tall Zoid will have
decently long arms and that a small Zoid will have shorter arms. For instance, the formula
behind the scaling of the arms is the following simple one from line 60 of the script:
$armsLength = (`rand 0.9 1.2` * $rndHeight);
Here $rndHeight is the overall random height scaling factor. As you can see, a random
number is involved, to make sure that two similarly tall Zoids won't necessarily have the
same arm lengths, but the final number is also connected to the Y scaling of the character. Y
scaling, however, is not the simple scaling value of the Zoid's top transform node. You can
scale the character only by changing the scale attributes of its joints, and the overall height is
actually a composite of the length scaling of the legs and torso joints. This makes the object
space uniform from Zoid to Zoid, allowing for easier debugging of the script's simple math
in case of weird results. Additionally and more important, a nonproportional scaling of the
character's top node, say <<1.0, 2.0, 2.0>>, would lead to warping of the joints and the over-
lying skin, an effect not immediately noticeable if the character is in the neutral position, but
easy to spot as soon as there's a left/right or forward/backward tilt of the head and torso.
In the phase I just described, legs and arm joints are scaled proportionally on all their
axes. This process is quite similar to scaling the top node, which generates a proportionate
character, simply bigger or smaller. At this point, the script forks into three possible flows
through the long if block between lines 109 and 197.
In 30 percent of the cases, nothing happens. The Zoid you now have is what you'll see in
the scene, and no further changes
are made to its bone structure.
In 35 percent of the cases,
you'll get a strong Zoid. The thick-
ness of legs, torso, and arms is
increased, basically through simple
change on the Scale Y and Scale Z
attribute of each joint. The X
scaling, or length of the joint, is
untouched. The result is usually a
rather muscular Zoid.
In the remaining 35 percent of the cases, the thickness of the limbs and torso also changes,
just in the opposite direction, generating thin, skinny Zoids.
Figure 6.8: An example of the variations generated by the script Zoid_sizeVar.mel on the
scene Zoid_16arr_size.ma—input values 0.8 and 1.25
These percentages and scaling values embedded in the code are a matter of personal taste
and testing. They do not have a particular logic other than that the arms shouldn't get long
enough to touch the ground level. How much distortion
the characters could tolerate without badly affecting the overall figure is mainly a matter of
stretching and compressing joints for each of them. The whole script is not much more than a
little servant patiently doing what a human would do if they had to make variations to hun-
dreds of characters and could survive the surely life-threatening boredom.
Finally, the script changes the position of one object, the bum_arrival locator. This
object is the target of a weighted constraint dictating where the spineRoot joint, the root of the
entire character's skeleton, is to be positioned when the character is fully standing. Its Y value
(see Figure 6.9) depends on the length of the legs if you don't want the long legs of tall Zoids
bent almost 90 degrees and short Zoids floating a good 20 percent of their height from the
ground.
With the colors and the shapes modified, we're done with the static characteristics of a
model. Now let's see what kind of tricks the animation team needs to tackle the problem of
the same animation used for many similar but never identical characters.
Figure 6.9: The effects of the unadjusted base animation on Zoids with modified proportions.
Whereas the base model would stand correctly, a tall Zoid has its legs bent, and a short one
seems to be floating well above ground level.
Figure 6.10: Breaking the connection between the animation curves and the spineRoot joint
(yellow arrows) and creating new ones (blue arrows) from the same curves to the locator bum_anim
With the help of the Hypergraph, let's break the connection between this joint and its
two animation curves (see Figure 6.10): select the two blue arrows connecting them to the
joint, and simply press the Delete key.
Now we can connect those animation curves to a newly created locator, which we will
call bum_anim, and parent under the Zoid top node. In this way, the locator is in the same
object space as the spineRoot joint.
A simple MM drag and drop from each curve to the locator node opens the Connec-
tion Editor, in which the output of each animation node needs to be connected to the proper
translation attribute. Finally, we point-constrain the spineRoot joint to the locator, leaving
the weight of the constraint to its default value of 1.0.
If you did everything properly, you shouldn't see any difference in the resulting anima-
tion. The spineRoot joint is still following the same trajectory in space, since the original ani-
mation curves are still involved. The only difference is that now the animation curves don't
drive the joint directly, but through the point-constraint to the locator.
Now let's create a second locator, bum_arrival, also a child of the character's top node.
The spineRoot joint has to be constrained to this locator too, but let's change the weight of
the new constraint to 0.0. Notice that this locator won't be animated. Now let's alter our
script Zoid_sizeVar.mel by inserting what is currently line 91, the line responsible for modify-
ing the Y coordinate of this second locator to match the height of the hips when the legs are
extended:
setAttr ($suffix + ":bum_arrival.ty") (0.511 * $legsLength * 0.97);
Finally, let's keyframe the two weights so that when the character is fully standing, the
spineRoot joint is fully and exclusively constrained to the object bum_arrival and so that when
the character is seated, it's fully and exclusively constrained to the object bum_anim—of course
with a smooth transition between these two states (see the animation curves in Figure 6.11).
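If you would rather set these keys from MEL, the sketch below shows the idea. The constraint
node and weight attribute names follow Maya's default target-name-plus-index pattern and are
assumptions; the frame numbers are arbitrary.

string $cns = "spineRoot_pointConstraint1";        // hypothetical default constraint name
// seated: follow the animated locator only
setKeyframe -t 1  -v 1 ($cns + ".bum_animW0");
setKeyframe -t 1  -v 0 ($cns + ".bum_arrivalW1");
// fully standing: follow the static, height-adjusted locator only
setKeyframe -t 30 -v 0 ($cns + ".bum_animW0");
setKeyframe -t 30 -v 1 ($cns + ".bum_arrivalW1");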
guarantee the full extension of the arm. Also in this case, only one locator is holding the
translation animation; the other two merely mark a position in space.
In each case, only one locator is actually animated, but this is not a problem because
the character stays in the arrival position for only one frame. If such a position had to be
held for a while, a simple jitter of the arrival locators would prevent a dead-on-spot look
typical of perfectly still 3D objects. With a character about one unit tall, as our Zoids are, a
jitter of 0.02 to 0.05 units would be enough.
Finally, I created a character set for the animated Zoid and encased its animation
curves in two separate animation clips, one for the upward movement, the other for the
downward movement. Although the choice of a segmented animation turned out not to be
useful in the scene produced for this chapter, you might use the concept in other situations, if
you want to combine more clips in different chains of action. In the case described here, the
idea was to hold the standing position for a short time, maybe have some jittering on the
raised arms to avoid the characteristic dead-on-spot look, and then have the Zoid sit down
again. But in the end, any hold longer than a few frames widened the wave pattern made by
the crowd and resulted in an unnatural look that I simply decided to avoid.
Figure 6.13: The indoor basketball court, a few hours before the arrival of the audience
Additionally, you can select one or more objects, usually locators, before running the
script. If you don't select an object, a locator is created at runtime, and the row of characters
follows the positive X axis, starting from the world origin. If you select one or more objects,
one row for each selected object is created. Each row is of the user-defined length and is
placed starting where each object is, consistent with its orientation and scaling.
The ability to place Zoids based on selected locators is an extremely useful feature
when placing a final large crowd. In the scene sportPalace.ma on the CD-ROM, 18 locators
are pre-positioned, waiting to be used by the rowCreator.mel script. Once all systems are go
for the creation of the final scene, I select the locators of the side sectors of the bleachers
(length 5u), let the script run smoothly for 15 minutes, let it run a second time to take care of
the middle sector (length 6u), and voilà: our empty scene is automagically populated by 180
colorful Zoids (see Figures 6.17 and 6.18).
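The CD's rowCreator.mel is the authoritative version; the fragment below is only a rough sketch
of its central idea, referencing the animated Zoid once per seat and sliding each copy along a
row. The namespace pattern, the top-node name, the 0.5-unit spacing, and the file flags as
written are assumptions for illustration, so check them against your own setup.

global proc rowSketch(int $count)
{
    for($i = 0; $i < $count; $i++)
    {
        string $ns = ("Zoid" + $i);
        file -reference -namespace $ns "scenes/Zoid_animated_good.mb";
        string $top = ($ns + ":Zoid");                  // assumed top-node name
        if(`objExists $top`)
            setAttr ($top + ".translateX") ($i * 0.5);  // space the seats along X
    }
}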
Here I should mention a couple of significant details. First, the script doesn't actually
duplicate the models, but creates references to the file Zoid_animated_good.mb, which con-
tains an animated version of the Zoid. This keeps the file size extremely small, in this case
4MB, but of course doesn't save RAM during render time, when the 3MB references loaded
180 times push the amount of used memory well beyond 500MB. Additionally, if we have to
change the animation, it's enough to change the file to which all the references point instead
of editing the heavy crowd scene itself.
Second, as I mentioned earlier, the scene is intentionally saved as a Maya ASCII file. On
a dual 800MHz PIII with 1GB of RAM, the scene with all the characters in it takes about 10
long minutes to load, something you don't really want to do too often. But you might have
to use a new animation, leaving the old one, for compatibility reasons with other scenes in
Figure 6.16: Each prepositioned locator will be parent to a row of characters.
Figure 6.17: A sky-cam view of the bleacher filled with Zoids
production, with the original name and in its directory. If you have a Maya binary file, the
only way to deal with this is to open the scene, take a 10-minute break, and then run a small
script to change the filename to which the 180 references point.
Using references and a Maya ASCII file, I can instead modify the base animation scene,
rename it so that it doesn't overwrite previous versions, and then use the Replace function of
my favorite text editor and quickly update my crowd scene.
But, hold on! The scene is not quite finished! All references are statically different, their
colors and proportions vary, but all their animations are perfectly in synch: they all stand up
at the same frame and then sit back down in a perfectly synchronized collective movement.
We'll now use the Zoid_crowdAnimator.mel script (on the CD-ROM) to finalize the
scene by varying, randomly but logically, the animation of each character.
Together with the few props of the basketball court, one hidden object is buried in the
scene sportPalace.ma: previzGrid. Hierarchically located under the previz_grp node,
the hidden object is actually a simple NURBS grid with about 200 patches in length and 6 in
width. Although 200 is an arbitrarily large number, loosely related to the possible random
locations of the Zoids, the 6 patches in width are related to the precise number of steps in the
seats. Some of the CVs of this linear NURBS surface will be a primitive form of dynamic
memory, holding the status tag of a particular Zoid.
A status tag is the name or the index of the action that the character is currently
pursuing, usually mirrored by a specific animation in the animation library.
In our case, only two status tags are necessary: stand up and be seated. The script's first
task is to find the closest CV to each Zoid and store its UV coordinates internally, connected
with the character name, which is usually in the form Zoid###:Zoid, where ### is the reference
number. The script then runs frame by frame through the entire timeline to check if and
when any of these assigned CVs changes its Y value. This motion is created through a simple
lattice deformer in the same group as the grid, running over it and raising the CVs like a
wave (see Figure 6.18).
Figure 6.18: The lattice responsible for the Y motion of the grid CVs. The closest CV to a
character defines, with its motion, when that character stands up and sits down.
When a change in the height of a CV is detected, the frame number is again stored
along with the character name and then considered a time-marker of the character switching
from seated to standing or vice-versa.
Once this pre-processing has finished (in the 180-character version of the scene, 7 to 30
seconds per frame are necessary for this phase), the script flows through the third and last
part, where all the information retrieved until now is mixed with a pair of user-provided
inputs: the enthusiasm and the reactivity.
These two extra attributes (silently added by the script to the character's top node if not
previously available) have their values randomly chosen in the ranges requested by the user.
The enthusiasm defines how much faster (or slower) the character executes its actions com-
pared with the base animation. For example, a Zoid with an enthusiasm of 2.0 will rise twice
as fast as the base animation timing, and a 0.5 value indicates a slow Zoid, taking twice the
time to complete the same action. The reactivity decides how much delay there is from the
moment the Zoid should begin the action, given by the CV movements, and the moment it
actually does so. For example, human reactivity to a stimulus is on average about 0.1 seconds,
barely two or three frames, but in a crowd scene I tend to use higher, slightly unrealistic values
because the multitude catches most of the attention and it's difficult to spot small differences.
The default values provided with this script's interface usually give good results.
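As a hypothetical sketch of that per-character step (this is not the CD's Zoid_crowdAnimator.mel),
the fragment below adds the two attributes when they are missing and fills them with random
values in the requested ranges. The actual script then uses these values, together with the frame
at which the character's grid CV moves, to offset and scale that character's animation clips.

string $top = "Zoid027:Zoid";                      // assumed character top-node name
if(!attributeExists("enthusiasm", $top))
    addAttr -longName "enthusiasm" -attributeType double -defaultValue 1.0 $top;
if(!attributeExists("reactivity", $top))
    addAttr -longName "reactivity" -attributeType double -defaultValue 0.1 $top;
float $enth  = rand(0.5, 2.0);                     // 2.0 = stands up twice as fast
float $react = rand(0.1, 0.5);                     // delay, in seconds, before reacting
setAttr ($top + ".enthusiasm") $enth;
setAttr ($top + ".reactivity") $react;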
Figure 6.19: The width of the wave is proportional to the length of the base animation clips, and the
front speed is defined by the speed of the hidden lattice.
Last-Minute Changes
At the beginning of the chapter, I talked extensively about doing things one way, but being
able to modify them. Let's see if we managed to keep that flexibility.
To start, once the Zoids are placed, you can move them, both in rows, through their
common parent, or singly, grabbing each character by its top node. You can change their
colors repeatedly, using the Zoid_hueVar.mel script, but also by tweaking the actual shader trees
if the change is needed on a specific character. You can also change the size of a character,
manually or by running the Zoid_sizeVar.mel script. In some cases, this script might pro-
duce some interpenetration because the spacing between two closely sitting characters is not
recalculated. If you resize manually, the arrival locators for the spineRoot joint and the arms'
IK handles might also need some adjustment.
You can adjust the animation at any time. You can edit the scene file from which the
references are sourced, and you can control the characteristics of the wave of standing Zoids
in its overall speed by increasing the speed of the lattice over the grid and individually by
lowering the enthusiasm, causing the Zoid to spend more time standing up (see Figure 6.19).
You can manually adjust the time offset and scaling of all the clips, but because the
characters are actually only references, their curves are locked, and you can't edit them. The
only workaround, other than editing the Zoid_animated_good.mb file, is to import the file
instead of referencing it.
stand up. Placing an aim constraint on each head joint to an object following the wave
front would be a good way to start implementing these actions.
Although the standing position, with legs and arms extended, is similar for all charac-
ters, the sitting position should be slightly different for each character. You could even
subtly animate this position. For example, vary the rest position of feet once in a while,
and tilt the torso left/right/forward/backward with a slow frequency.
Summary
In this chapter, I demonstrated one of the many ways you can handle a simple crowd scene, hope-
fully kick-starting or reinforcing your knowledge of the task and the problems inherent in it.
First, we modeled, animated, and textured a character. Then we loaded the character
as a reference and positioned it in the final scene. Next, we used two scripts to procedurally
randomize some of the character's most important features: colors and proportions. Finally,
we procedurally shifted and stretched the animation clips, according to the individual enthu-
siasm and reactivity values of each character.
Now it's your turn to put into practice what you've learned. Have fun and do not keep
the knowledge for yourself: share it!
Taming Chaos: Controlling
Dynamics for Use in
Plot-Driven Sequences
John Kundert-Gibbs and Robert Helms
Into many a high-budget CG (computer graphics)
production—and a number of low-budget ones—comes the need to animate
complex, natural-looking events that are either too difficult, too dangerous,
or too uncontrollable to film in real life or that need to exist in a stylized,
rather than a real, world. Although creating a live-action explosion might be
easier than creating a CG explosion, the shot might be dangerous or even
impossible to film in the real world. (It's unlikely that the French govern-
ment would let one blast the Eiffel Tower, for example!) Practical minia-
tures can be a good substitute for the real items, but they are expensive to
build, and the production crew only gets one chance to get the effect right.
Even more important, a hands-on art director might want to see a range of
different looks for the explosion, which would drive the time and cost out of
most productions' budgets. Here is where creating CG simulations of natu-
ral (or naturalistic) events comes into play: once a CG simulation is set up,
you can change any number of variables that feed into it, alter camera
angles, and even violate strict physical accuracy to help "sell" the shot.
This chapter and the next will deal with re-creating these events so
that they are believable for the audience and under enough control that you
can adjust the effect to get a specific look, even if it's not strictly accurate
compared with its real-world counterpart. Chapter 8 will deal with particle
effects akin to the explosion mentioned earlier. In this chapter, we will deal
with smaller numbers of larger bodies with unvarying shape (rigid bodies in Maya's parl-
ance) and discuss how to get these objects to do just what your director—or you yourself—
want them to do.
working on a solo project, you will only need to commune with yourself. In either case, it's
extremely important to thoroughly talk (or think) through the effect, down to very small
details, asking why it needs to be done for plot reasons, why it should be done in CG instead
of via another method, and how the shot will be executed. At the same time, a storyboard
artist should be rendering their ideas of what the shot sequence should look like, and these
boards should be edited and then used as a guide for the final shot. Even if you have to draw
stick figures yourself, a few hours using pencil and paper will save you time later in produc-
tion. Additionally, you can study archival footage of effects shots from previous similar pro-
ductions to help visualize the action of the shot. Referencing similar shots is always helpful
in explaining and understanding the ingredients of a current shot. If the shot needs to be
composited over live-action plates, be sure to discuss how shot information will be commu-
nicated between filming and effects people.
Finally, don't ever forget to dig in and discuss the practicalities of a shot: how many
people will be used on the shot, how much time will be allotted to the work, and how much
extra research and development money is available for equipment, reference footage, and
field trips to study the real thing. It's far better to have an honest idea of the realities of the
situation—even if it's "you have 3 weeks, two people, and $500 to do this shot"—than to go
in thinking you have many more resources than you actually do.
The planning stage is all about one thing: communication. If your team (or you) can
communicate efficiently and effectively at this stage, this work will establish open channels
of communication throughout the later stages of production, and that can only help in the
end. Nothing is worse than a production in which small groups of people restrict informa-
tion to themselves, and not much is better than a collection of artists all working openly
toward a creative goal.
as close as you can using what Maya gives you out of the box (which is pretty darned good
anyway!) or perhaps write a few expressions or MEL scripts rather than full-blown plug-ins.
In the beginning of the R&D stage, it is a good idea to break a simulation into one or
more simple elements. By concentrating on one element of a simulation (a single ball or one
shard), you can really dial in just the right settings to get that element to behave itself in an
environment that allows speedy interaction because the simulation is simple and thus easy
for Maya to calculate. Because the situation is simplified, you'll also get a better understand-
ing of how adjusting individual settings affects the behavior of the system, which will help
you later in the process.
Once you have the individual elements working well, begin "stacking" them together,
one after another. As you add each new element, you will invariably run into new problems
(or should we say "challenges"?) as the elements interact in unexpected ways. This stage is
probably the most difficult, time consuming, and frustrating of the entire R&D cycle—even
of the process as a whole—so be prepared to spend a good deal of your time "budget" on
this stage. It can take many a late night and weekend workday to get the various parts of a
simulation working in harmony and in a fashion so that you at least think you understand
enough about the system to make intelligent guesses as to how the system will behave when
you adjust the settings.
As you finally get things working together, you can start running full-fledged test simu-
lations to see if what you've built will actually stand up to the requirements of the shot. If not,
it's back to the drawing board to figure out what went wrong. If you're lucky, a few small
changes will fix the problem. In the worst case, you've gone about creating this simulation the
wrong way entirely, and you need to go back to the early stages of the R&D cycle and come
up with a different method for creating the simulation. Although we'd like to say this latter
case rarely happens, it's actually rather common, so be sure to budget in a safety margin in
case things go hopelessly wrong just when you think you've got everything working.
If and when you do get all the elements working together, it's time to move on to the
real thing and try to render some usable frames!
proportions when actual scene data is used! It's not a bad idea to keep a few—or few hun-
dred—extra CPUs around to help with the chore. At this stage, you will need to put your
real models into the simulation (if you haven't already), and you will need to place all the
elements needed for the simulation into one scene. Obviously with large numbers of more
complex models, interactivity and simulation speed will be slow indeed, so the better you
have prepared in the R&D phase, the faster you will be able to produce the final shot. Gen-
erally, the fewer times you have to run the simulation in its final form to get it right, the bet-
ter, as these simulations may take up to several days to compute.
One very good way to speed things up at this stage is to place groups of rigid bodies in
their own dynamics layers. In Maya, you do this by either creating multiple Rigid Body
Solvers for the separate elements to live in or by placing rigid bodies on separate collision
layers. When a group of rigid bodies is in a separate layer, none of its constituents will "see"
rigid bodies in other layers, and thus Maya doesn't have to do as many collision calculations,
which can speed things up a great deal. Obviously if you have a simulation in which all the
elements have to interact (pins being struck by a bowling ball, for example), you can't use
this trick, but if your scene consists of multiple groups of objects that don't interact with
other groups, this trick can save you a great deal of time during final simulation work.
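Both approaches boil down to a few commands. The attribute and command names below are
given from memory of Maya's rigid body nodes and should be treated as a sketch to verify
rather than a recipe; the object names are placeholders.

// Option 1: separate collision layers on the same solver; bodies on different
// layers ignore each other during collision calculations
setAttr "rigidBody1.collisionLayer" 0;
setAttr "rigidBody2.collisionLayer" 1;
// Option 2: a second rigid solver; rigid bodies created while it is current are
// solved (and collide) only among themselves
rigidSolver -create -name "debrisSolver";
select debrisChunk1 debrisChunk2;
rigidBody -active -bounciness 0.4;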
Another tip to speed up production work is to cache or even bake your simulation data
prior to rendering. Caching render data saves each frame's dynamics state to a file that can
be accessed later at much faster speed than the original calculations took. Baking a simula-
tion actually turns the dynamics into keyframe data, making the results extremely portable
and allowing for fine adjustments to the actual keyframes to tweak a shot "just so." Also, if
you are having problems with different machines producing different simulations (see the
earlier sidebar), or if you want to use multiple machines for rendering, allowing one com-
puter to create the simulation and then caching or baking it will ensure that the simulation is
correct and available to all rendering CPUs simultaneously.
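In MEL terms, caching is a switch on the solver and baking is a single command. The fragment
below is a hedged sketch (flag names as we recall them for bakeResults), with the frame range
and object names standing in for your own.

setAttr "rigidSolver.cacheData" 1;        // cache each frame's rigid body state after one playback
select cueBall ball1 ball2;               // objects whose motion we want baked to keyframes
bakeResults -simulation true -time "1:400" -sampleBy 1
            -disableImplicitControl true -preserveOutsideKeys true;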
Maya's renderer (or any other we know of) cannot efficiently render dynamics simula-
tion data on multiple CPUs (that is, distributed rendering). Because each frame of a
simulation depends for input data on the frame that came before, each separate CPU
has to "run up" the simulation through every frame that precedes the one it has been
told to render. Thus, if one CPU needs to render frame 2000 of a simulation, it will
need to calculate the previous 2000 frames of the simulation before it can even begin
rendering that frame. Obviously, this is a huge waste of computing resources. Addi-
tionally, each machine may in fact calculate the simulation differently from the others,
so the resultant frames may be useless when you finally do get them! Using either the
baked simulation or caching methods described here will resolve the problem.
Figure 7.1: Glass shards stuck in a rough dinosaur model—obviously it's time to rework the simulation.
terribly wrong somewhere in the production pipeline, and it's time to go back and see what
happened (see Figure 7.1).
If, on the other hand, the problems are small, such as the sharp edge of an object pierc-
ing another for a couple of frames, a background plate showing through a rendered object,
or specular highlights washing out in places, don't panic. There's no need to go back and
expend the time and energy to get these details right in the original simulation and renders.
You can use a range of postproduction tools to "cheat" your way to the perfect look.
If your problem area really is just a frame or two, extracting those frames into a pro-
gram such as Photoshop and cloning or airbrushing away the problem spots is an efficient
solution. If the problems are over more frames or are global to the shot itself (such as poor
color matching with the background plate), any number of compositing packages—from
After Effects to Tremor to proprietary software—have all sorts of matting, color correction,
and other tools that you can use to get rid of these nasty problems. If you thought ahead and
rendered your scene elements in separate layers, you can even adjust individual elements of
your shot to put that extra bit of polish on it.
In general, when a problem gets below a certain threshold, our mantra is always "fix it
in post!" There's really no reason to spend dozens or hundreds of hours to get things
absolutely perfect in your renders if the problem can be fixed in minutes with a compositing
package. After all, it's what your audience sees that counts, not how you got there!
Now that we've covered the basic pipeline of how to create a dynamic simulation shot,
we'll present two example shots that we recently worked on in order to give you a more
hands-on feel for how to put all this theory into practice. First, we'll go over how to get 16
pool balls to end up where they need to after being broken by the cue ball, and then we'll
discuss how to create convincing shattering glass effects to be composited over a live-action
background plate.
The surface of a pool table is in a 2:1 ratio of length to depth. Thus, if you build
your table 80 units long, it should be 40 units deep for accurate reproduction of
the way a real pool table works. See rigidTableStart.mb on the CD-ROM for
an example.
Figure 7.3: A pool table suitable for texturing, but too complex for rigid body collisions
Notice that, as shown in Figure 7.5, the hexagonal polygon pockets do not match the
higher-resolution rounded shape of the rendered pockets. On a large scale, however, this dis-
crepancy is small and therefore unlikely to be seen. We thus decide that the advantage of
faster simulation times outweighs the slight inaccuracy in collisions with pockets. If it ever
becomes obvious that the collisions are not correct, we can go back and adjust the number of
polygons in these cylinders (using the Split Polygon or Smooth tools) to add more detail.
If you are following along with this example and building a rigid body collision
surface, please read on before you construct your surface. This iteration of the
table's shape has some deficiencies that need to be corrected.
When we get to actually sinking the pool balls in these pockets, the rigid body simu-
lation will break down because the normals of the cylinders are pointing outward rather
than inward. (If normals don't point toward collisions rather than away from them,
problems ensue.) To resolve this problem, you can first display the cylinders' nor-
mals by opening the Mesh Component Display section for the pCylinderShape in
the Attribute Editor and turning on the Display Normals option. Then choose Edit
Polygons → Normals → Reverse to reverse the direction of the normals.
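The same fix can be scripted; the lines below are a sketch using polyNormal's reverse mode
(normalMode 0 in the versions we have used) and polyOptions to toggle the normal display.

select pCylinder1 pCylinder2;          // the pocket cylinders
polyOptions -displayNormal true;       // confirm which way the normals point
polyNormal -normalMode 0 -ch 1;        // reverse them so they face the incoming balls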
Figure 7.4: A rigid body table constructed from simple planes and cylinders
Figure 7.5: Detail showing the simplified rigid body pocket against the more detailed pocket
shape that will actually render
When we created our planes and cylinders, we made sure that none of them were actu-
ally intersecting any others. Because these objects will all be passive rigid bodies, it probably
wouldn't be a problem if they touched, but a small amount of space between objects reduces
the chances of interpenetration errors and, we think, speeds up calculations just a bit, so we
left a small amount of space between each surface and its neighbors.
Although leaving space between the walls and pockets of our rigid body surface is
probably a good idea, it is imperative that we do so between each pool ball and the surface
of the table, as well as between each ball and its neighbor. As shown in Figure 7.6, we placed
all the balls just a tiny bit above the surface of the table. If we do not separate all these active
rigid bodies, we will end up with rigid body interpenetration errors (the bane of rigid body
simulations!), and the simulation will break down immediately.
Once we have all our shapes in order, we can simply select all the table elements and
make them passive rigid bodies, then select the balls and create active rigid bodies (choose
Soft/Rigid Bodies → Create Active Rigid Body). Then, with all the balls selected, we create a
default gravity field (choose Fields → Gravity).
Now we need to fix the problem of the pool balls floating above the table. With each of
the pool balls still selected, go to the Channel Box and change the rigid body bounciness to
0, and change damping, static, and dynamic friction to 10 (the maximum allowed). Next
select the table surface and make the same changes in the Channel Box. As gravity pulls the
balls down to the table, they will stick to its surface like glue, and after some time they will
come to complete rest on the surface itself, which is what we want. Now play back the ani-
mation until all the balls are completely still (this may take several hundred frames, because
they will likely rock back and forth slightly for a while) and then stop, but don't rewind the
animation. Here is how we want the balls to be initially, so, in the Dynamics menu set,
choose Solvers → Initial State → Set for All Dynamic. This resets all dynamics calculations
to the current state so that when you rewind the animation, the balls will remain in their cur-
rent state. A sample scene on the CD-ROM (rigidTableStart.mb) contains a rigid body
table at this stage of the simulation process.
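For reference, here is a rough MEL equivalent of the setup just described. The flag names are
from memory of the rigidBody, gravity, and saveInitialState commands, so treat this as a
sketch and confirm against your documentation; the object names are placeholders.

select ball1 ball2 ball3;                 // ...and the rest of the balls
rigidBody -active -bounciness 0 -damping 10 -staticFriction 10 -dynamicFriction 10;
gravity;                                  // default gravity field connected to the selection
select tableTop bumper1 bumper2;          // the table planes and pocket cylinders
rigidBody -passive -bounciness 0 -damping 10 -staticFriction 10 -dynamicFriction 10;
// after letting the balls settle on the table, freeze that pose as the starting point
saveInitialState -all;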
As you play back your simulation, be sure your playback rate in the Timeline Preferences is set to Free (choose Window → Settings/Preferences → Preferences, and then choose Timeline from the Settings submenu). Because Maya's dynamics calculations require the state of each previous frame, the simulation can go berserk if any are skipped (which can happen when playback is locked to a certain number of frames per second), and you can end up with very strange results.

Figure 7.6: Detail showing that the pool balls start off "floating" above the surface of the table
Figure 7.7: The cue ball rebounding off the surface of the table
Now we can actually get to work animating the balls! First, to simplify things, we
change all the balls except the cue ball to passive rigid bodies (which means they are immov-
able) and start playing with initial velocity and rotation settings to get something like an
appropriate motion out of the cue ball. While working on this initial setup, we immediately
run into a problem that will be a plague throughout the research cycle. When the pool ball
strikes the rear bumper (after ricocheting off of the pool balls), its rotational motion causes it
to "climb" the side of the bumper and fling itself into the air as it rebounds, as shown in Fig-
ure 7.7. Although an interesting effect, this is not what pool balls commonly do when rolling
around the table. (Yes, they occasionally do bounce off the table, but this is not the effect
we're after, so we need to control it.)
We will rarely present actual parameter numbers (such as dynamic friction) used
during this discussion, because the fine-tuning of each of these settings is highly
dependent on your setup. Thus, it makes more sense for us to present the strate-
gies we use, rather than the results thereof.
We decided to adjust initial velocity and rotation settings rather than create
impulses for the cue ball to help us better understand the simulation. It is difficult
enough to get a good simulation by directly plugging in speed and rotation values,
and all the more difficult when trying to keyframe different impulse values to create
initial motion. Because a pool cue striking a cue ball is almost an instantaneous
effect, we felt that using these initial velocity and rotation settings would work in the
final animation, and thus working with impulse values would needlessly complicate
our simulation work.
Figure 7.8: The cue ball sticking inside the back bumper, an error caused by interpenetration of the two rigid bodies
Figure 7.9: With multiple balls and higher velocities, interpenetration errors and balls jumping off the table become a pervasive problem.
To resolve this problem, we can try reducing both static and dynamic friction of the bumpers to 0 and setting gravity to a much higher amount, such as 50 instead of its default 9.8.
We also find we have to lower the friction of the cue ball and table surface to fairly low num-
bers because table friction is partially responsible for the rotational speed of the ball. We
need damping (a setting that causes an exponential falloff of motion so objects will settle
down to rest in a simulation) at a small decimal number such as 0.1 or 0.2, or else the ball
will never come to complete rest. Finally, as shown in Figure 7.8, we quickly find that we are
getting interpenetration errors when the cue ball strikes the bumpers at the high velocity it
needs to travel in order to look like a convincing pool shot. Sometimes this error is just noted
by Maya, but other times it causes the ball to "stick" inside a bumper and refuse to move
anymore—a definite breakdown of the simulation!
Because of the way Maya's dynamics engine calculates friction, bounciness, and
damping—it multiplies the settings of the two colliding objects together—you ini-
tially need to adjust settings of these attributes in tandem. In our case, this means
adjusting the static friction, say, of both pool balls and table surface to a higher or
lower value at the same time. When it is time for more refined adjustments of these
settings (that is, when the simulation is close to correct), the objects can be
adjusted separately to tweak exactly how they behave.
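A quick worked example of that multiplication: if both the cue ball and the table surface have Static Friction set to 0.5, the effective friction at that contact is 0.5 × 0.5 = 0.25; raising only the table's value to 1.0 brings the contact back up to 0.5, and raising both to 1.0 gives 1.0.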
Undeterred by our initial problems, we move forward and reset all our pool balls to
active rigid bodies and rerun the simulation. First, we find that the pool ball's initial velocity
has to be set much higher than it had been when it was the only active rigid body, or else the
balls don't scatter well. As we increase the velocity of the cue ball, and experiment with rota-
tional speeds as well, we find that our initial problems of interpenetration errors and balls
ricocheting up into the air are, of course, multiplied by the number of balls and the higher
energy being imparted to them, as shown in Figure 7.9.
After a bit of tweaking and reducing friction to very low values, we do finally get a
break that is somewhat convincing, as shown in Figure 7.10 (an animated version, breakTake1.mov, is available on the CD). However, we still have two nasty problems. First, to keep
balls from jumping up, we had to reduce their rotational speed and thus had to reduce fric-
tion across the board. This seems fine until one looks at the animation: the balls appear to
"skate" across the surface of the table rather than rolling as they should. Second, the simula-
tion is very sensitive to initial conditions, breaking down (interpenetration errors and jumping balls) if initial conditions change slightly. This second problem is especially insidious, because we need a great deal of fine control over the simulation so that we can get the balls to end up where they need to go.

Figure 7.10: A decent run of the break simulation (see the full animation on the CD)
To try to resolve these issues, we raise the friction levels (dynamic more than static,
because we want the balls to roll more freely once they are moving slowly), increase the initial
velocity of the cue ball even more to compensate for the added friction, and reduce the step
size in the rigidSolver settings window to a much smaller number, such as 0.005 or 0.001.
This last adjustment, while helping with the interpenetration errors, really slows simulation
time, because Maya has to do a great many more calculations per frame of animation.
The rigidSolver Step Size setting (available by choosing Solvers → Rigid Body Solver from the Dynamics menu set) adjusts how many times per second Maya "looks" at the state of all its rigid bodies. The smaller this step size, the more times per second
Maya has to run rigid body calculations, resulting in more accurate collision detec-
tion at the cost of slower simulation speeds. The default step size of 0.03 seconds is slightly smaller than the length of one frame (0.04 seconds). Reducing the step size to 0.001, the smallest value it can be, forces Maya to check the state of all rigid bodies about 42 times per frame, which results in much slower playback.
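If you find yourself changing the step size often, it can be quicker to set it directly; a minimal sketch, assuming the default solver node name rigidSolver:

// Smaller steps mean more collision checks per frame but slower simulations.
// At 24 fps, a 0.005-second step is roughly 8 checks per frame; 0.001 is about 42.
setAttr rigidSolver.stepSize 0.005;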
After many more hours of tweaking numbers and watching slow playbacks, we arrive
at a second take for the break, a still of which is shown in Figure 7.11 (the full animation,
breakTake2.mov, is on the CD). We now have a fairly convincing pool ball break, but it's far
from controllable. If all we needed was a convincing break, but didn't care about where the
Figure 7.11: A second take at the break simulation (see the full animation on the CD)
Figure 7.12: A revised version of the rigid body pool "table"
balls ended up, we could probably stop here. Because we need more control over the simula-
tion, however, we need to keep plugging away.
After some more experimentation (if you call five hours of adjusting numbers and
swearing at the terrible and slow simulations "experimentation"!), we finally have a brain-
storm and decide to think inside the box. First, we note that the interpenetration errors do
not occur between balls, so we realize that by replacing our bumper planes with simple
polygonal cubes, which have depth as well as height, we might get rid of those nasty errors.
At the same time, we realize that by placing a frictionless, bounceless rigid body cube above
the pool balls (not touching them, of course, but just slightly above them), we could con-
strain the balls to stay on the surface of the table, getting rid of the biggest problem we have:
the jumping pool balls. Figure 7.12 shows our revised rigid body table, with the top cube vis-
ible, though we have it set to invisible normally so we can see the actual pool balls.
This new version of the rigid body table proves to be much more robust, allowing us to
experiment freely with minor adjustments to the initial settings of the cue ball without the
simulation breaking down or balls jumping off the table. Just as important, we can now set
our step size higher again (around 0.01 to 0.005), which makes simulations play back much
faster and thus allows for quicker experimentation.
Now that we have things running fairly smoothly and can experiment more freely with
the settings, we can move into the production phase of the process and try to get all the balls
to end up where they should be after the break is finished.
There is an additional error in this version of the break: the cue ball ricochets off the
side bumper and travels backward at high speed due to its backspin. This change of
direction would be appropriate if the bumpers had a rubber surface, but not likely
when they are covered in felt. On doing some audience research, however, we deter-
mined that one has to play back the animation at about two frames per second
before anyone is even aware of the problem. Even then it doesn't seem to bother
anyone because of the chaos of balls bouncing around, so we decided to let this stay
in the animation.
The question is, is this close enough for the shot's needs? To determine this, we first
bake the simulation (choose Edit → Keys → Bake Simulation) and then play with the result-
ant keyframed animation to see if we can cheat balls into place without the shot looking
unnatural.
Once the baking is finished, we're left with a whole mess of keyframes, as Figure 7.14
shows, for translation and rotation channels of each of the 16 balls. If we wish, we can elimi-
nate any final small-scale bouncing the balls do by clearing all keyframes on the translateY
channels, but we actually like the smallish bounces the balls make, so we leave them. Some
balls are still rotating slightly at the end of the simulation, so we keyframe a gradual slowing
to no rotation for them.
To speed up playback after the animation is baked, you can remove all rigid bodies
from their respective balls, thus eliminating any leftover calculations Maya needs to
perform (choose Edit → Delete by Type → Rigid Bodies).
The basic strategy for subtly adjusting the simulation to get balls into just the right
place is as follows.
1. Set up your windows so that the top view is in the top pane and the Graph Editor is on the bottom (see Figure 7.15).
2. Select a ball that's in the wrong position, select its translateZ or translateX attribute in the Graph Editor, and then look to see where the last "bump" in the curve lies (this is the point of the last collision of the ball with another object). (See Figure 7.15.)

Figure 7.14: The Graph Editor showing the mess of keyframes left on the cue ball after baking the simulation
Figure 7.15: Top and Graph Editor panes, showing a ball's translateZ attribute selected
3. Select all the keyframes after this point, and then place the play head on the last frame
of the timeline (so that the top view shows where all the balls end up). (See Figure 7.16.)
4. Select the Move tool, and then, in the Graph Editor pane, drag up or down until the
ball ends up in the right position for the final frame (this ought to be close to its former
position, or the discontinuity will be noticeable in the animation). You will now have a
discontinuity between the selected and unselected frames, as shown in Figure 7.17.
5. Select a range of frames just after this discontinuity, and delete them, creating a
smoother curve between the two sets of keyframes (you need to delete enough frames
to create a smooth curve without noticeable jumps in animation, but not so many that
the ball stops unnaturally at the end of the animation when everything's moving
slowly). (See Figure 7.18.)
6. Test your new animation by scrubbing in the timeline and play blasting the animation.
Pay special attention to whether the ball obviously changes direction or speeds up or
slows down unnaturally, and go back and correct if necessary. Also pay attention to
whether this new animation path causes the ball to pass through another ball!
7. If you decide the ball stops too abruptly as it picks up its simulation keyframes again,
add a keyframe or two to help smooth the transition, as shown in Figure 7.19.
That's about it for doing the correction, though it becomes something of an art to
make these adjustments subtle enough not to be noticed. If any balls are too far away to be
moved during the actual break, we can try to move them during the cutaway to the robots
just after the break. People have an amazingly limited awareness of continuity for chaotic
arrangements of objects, so we should be able to get away with anything we need to at that
stage. (We'll have to do this with the one ball, as it just doesn't move fast enough for us to
move it the great distance it needs to go to get away from the eight ball.)
After all the adjustments, we finally arrive at a usable "simulation," shown in Figure 7.20,
that we can integrate into our animation (see breakTake4.mov on the CD for the animation, and watch how the balls have been moved versus breakTake3.mov).
Once we complete this stage and are satisfied with the results, we are ready to integrate
the simulation into the rest of the animation and to texture and light the shot for final rendering.
Figure 7.16: Later keyframes selected in the Graph Editor
Figure 7.17: Detail showing a discontinuity between selected and unselected frames
Figure 7.18: The animation curve in the Graph Editor after deleting several keyframes
Figure 7.19: Detail showing an added keyframe to help smooth an otherwise too abrupt velocity change
Figure 7.20: Final positions of the pool balls after being manually adjusted
Figure 7.21: The final version of the break, including lighting and texturing (see the full animation
on the CD)
effect desired. Figure 7.22 is a snapshot of the front of the building, which includes the ele-
ments which need to be modeled.
The problem of convincingly shattering a large amount of glass is a bit different from a
game of pool. Fortunately, we do not need to worry about the final position of any particular
shard of glass. Unfortunately, there are a large number of shards, with unique shapes and
sizes, that will behave differently during collisions, which presents a particular challenge for
the computer's processor. The larger pieces of glass will also need to shatter upon their
impact with other objects. This secondary shatter is complicated by the fact that we are using
dynamics.
their eyes on any one detail. Avoiding long scrutiny of the shot by the audience will
assist the illusion of reality.
Is there room to cheat the shot? Yes and no. The shot can be cheated at the cut from
side view to front view, but any cheating will be highly time intensive due to the large
number of objects that may need to be modified. It is best to get this one just about
right from the outset.
Why use dynamics? Because it is the only animation method that fits production time
and provides the level of realism desired. When dealing with a single irregular object or
a smaller number of regular objects, keyframing is probably faster and more accurate
than dynamics. This project deals with hundreds of irregular objects, which react not
only to the ground, but also to each other.
These questions show that we have little wriggle room in this animation. The resulting
shatter must be as real as we can possibly make it. The complexity of the situation prevents
us from easily fudging the position of any objects. The use of dynamics techniques removes
most primary controls from our toolkit. We are left using dynamics elements, some restricted
keyframing, and expressions to control this chaotic mess of a scene.
On our side, we have the audience's propensity to overlook minor incongruities. The
speed and complexity of this portion of the animation (about two seconds) are also great
advantages. Because the animation goes by so quickly and because there are so many objects,
it is extremely unlikely that anyone will notice two pieces of glass intersecting for one frame.
The audience's distance from any one element of the animation also helps us avoid scrutiny.
For inspiration during the creation of this shot, we are forced by our nonexistent
budget to go watch a bunch of action flicks with explosions and shattering. Poor us.
You can easily freeze the transformations by selecting an object and choosing
Modify → Freeze Transformations. Beware! This will permanently change all of the
translation, rotation, and scale settings to their default (zero or one) while leaving
the object in place.
computer has a hard time untangling two objects that become thoroughly embedded in one
another over the course of a single frame.
You may find that creating a second copy of passive rigid body collision objects and
reversing the normals is an effective way to prevent interpenetrations. This is espe-
cially true when using planar rigid bodies. Unfortunately, this does produce an extra
interpenetration error at every frame, because it creates two coexistent rigid bodies.
The simplest, yet most aggravating, problem that we come across is the interactivity of
the simulation—or lack thereof. Without memory caching enabled, stepping from frame to
frame to examine the results of the simulation takes an extraordinary amount of time. With
memory caching enabled, the cache must be deleted for every object each time that a change is
made. Additionally, as the scene gets more complex, we find that the cache seems a bit less
reliable. One major source of interactivity problems is evident when the animator steps back-
ward in time. Unless we return directly to the beginning of the simulation, pieces often end up
in the wrong places when going backward in time. We believe that this stems from the
method used to delay the glass shatters: turning the activity of the rigid bodies on upon reach-
ing the frame at which the glass needs to break. We use a few tricks to let us move around in
our simulation. The Graph Editor is one of these. Since we are keying the activity of our
active rigid bodies anyway, we can move the activity-inducing keys beyond the frame where
changes are needed. Doing this bypasses the dynamics calculations and lets us move around in
our own animation without using a supercomputer. Of course, this technique is quite useless
if the changes depend on objects being in some state other than their initial state.
Figure 7.24: The structure before deformation
Figure 7.25: The structure after deformation
explodes away from the structure. Figures 7.24 and 7.25 show the effects of deformation on
the structure. A play blast of the structure's deformation is also available on the CD as
structureDeform.mov.
We now need to consider several methods for controlling the primary shatter. The most
obvious method is to use force fields, but other methods include invisible keyframed passive
rigid bodies, initial conditions, and impulses. The force fields prove to be finicky and take a
lot of adjustments to the attenuation and magnitude attributes to achieve the results desired.
Impulses and initial conditions do a good job of getting that first explosive movement
imparted to the glass. We make use of all of these methods to ensure that the glass shards
move as desired.
When working with force fields, always use the Dynamic Relationships Editor to
make sure that the fields are linked to the rigid bodies you want them to affect. A
good shortcut is to select the bodies you want to affect when making a field. This
turns on the links between those bodies and the field by default.
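The same link can be made, or repaired, in MEL with connectDynamic; a minimal sketch, with the field and shard names below standing in for your own:

// Connect the radial field to one shard's rigid body geometry; repeat or loop for the rest.
connectDynamic -fields radialField1 glassShard1;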
The more difficult effect to achieve is the secondary shatter. The issue here is that glass
shards will break again when they hit the ground—this is what we are calling the secondary
shatter. The primary shatter is, of course, the first time the glass breaks and falls out of its
supporting structure. This is problematic due both to timing and positioning. The secondary
shards have to appear in the same position and orientation as the parent shards when they
are made visible (and the parent panes made invisible). We try parenting, keyframing, using
the cache, expressions, and constraints. Parenting doesn't turn out to be useful, because there
is no easy way to unparent on the fly without compromising the placement of secondary
shards. Keyframing is not practical due to the sheer number of times the cache will have to
be deleted. Expressions work, but take an annoyingly long time to add.
In the final result, the secondary shatters are difficult to notice and play a more
subconscious role than intended. Therefore, we suggest sticking with the simpler
primary shatter when doing work that will be viewed from a distance.
Figure 7.26: Single shard shot just before secondary shatter
Figure 7.27: Single shard shot shortly after secondary shatter
The method we eventually come up with is rather complex. By using two sets of sec-
ondary shards, we are able to treat one set as a single active rigid body so that it looks like a
whole piece of glass. The other secondary shards are all constrained by point and orientation
to the first set of shards. After adding the constraints (and the order this is done in is impor-
tant), these shards are also made into passive rigid bodies. Of course, if we just leave them as
is, we will have an awful mess of interpenetration warnings. To avoid this, we set the second-
ary shards to a different dynamics layer by changing the collision layer attribute in the Chan-
nel Box. All rigid bodies with matching collision layer numbers (integers) will collide, but
rigid bodies with different collision layer numbers will not interact. For added functionality,
we have collision layer -1, which is a global collision layer. If we set the secondary shards to
a new collision layer, they will never collide with the primary shards, so we need to keep that
in mind when reviewing our work for errors later. The last two things we have to do are
switching visibility between the two sets of shards and activating the second set of rigid bod-
ies upon impact. Unfortunately, these will require expressions.
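In script form, the layer assignment described above amounts to a setAttr per rigid body; the node names below are placeholders:

// Move the constrained secondary shards onto their own collision layer so they
// ignore the primary shards; layer -1 remains the global layer that collides with everything.
setAttr secondaryShardA_rigidBody.collisionLayer 2;
setAttr secondaryShardB_rigidBody.collisionLayer 2;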
We have several methods available to us for detecting collisions within secondary
shatter expressions. We could turn on collision information caching within the rigidSolver
node using the Attribute Editor, but that slows the simulation even further. Instead, we
choose to base the activation of these expressions on simple physics—activating the
secondary shatter when the shard's velocity on the y-axis becomes positive (when it
bounces). Figures 7.26 and 7.27 illustrate what is meant by a secondary shatter. Figure 7.26
is the frame just before a secondary shatter, and Figure 7.27 is a frame shortly after that
secondary shatter. A play blast of the results, paneShatter.mov, is on the CD. The Maya binary file for a simplified example of a single pane shatter (simplePaneExample.mb) and a general expression, secondary.txt, are on the CD.
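The expression on the CD is more involved than we can reproduce here, but the core idea can be sketched in a few lines of MEL. Everything below is a simplified stand-in: the node names are hypothetical, and we assume the rigid body node's current velocity can be read, as described above.

// Once the parent shard's Y velocity turns positive (it has hit the ground and bounced),
// hide the intact shard, reveal the pre-broken copies, and activate their rigid bodies.
float $vel[] = `getAttr parentShard_rigidBody.velocity`;
if ($vel[1] > 0 && `getAttr parentShard.visibility` == 1) {
    setAttr parentShard.visibility 0;
    setAttr secondaryShards_grp.visibility 1;
    setAttr secondaryShards_rigidBody.active 1;
}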
With the research done for a single pane of glass, it is time to put all of them together.
side. In the interest of getting the most feedback as quickly as possible, we complete the pri-
mary shatter and initial explosion for every pane of glass first. This will give us a good idea
of the overall feel of the scene, and let us better assess how many secondary shatters we actu-
ally need. We begin by establishing that the structural elements near the panes of glass will
need to be passive rigid bodies: it just wouldn't do to have a pane of glass fall right through a
steel support beam. Of course, the ground and masking arch also need to be passive rigid
bodies so that glass can collide with them.
Next, we perform crack shatter effects on each of the panes of glass (in the Dynamics
menu set, choose Effects → Shatter and adjust settings for the Crack Shatter effect). During
this step it is imperative to make sure that the shards produced by each crack shatter are
believable. This effect can produce some crazy-looking shapes. It is also important that we
avoid producing any puzzlelike shards. If two shards fit together like pieces in a jigsaw puzzle,
an ugly number of interpenetration errors will occur as they begin to move against each other.
With this done, each piece is given a mass roughly corresponding to its size. This is done
by eyeball and guesswork, as it does not need to be precise. We then give the pieces an initial
velocity to kick them out of the frames and add cone-shaped radial force fields to help make
the shatters diverge some. Shards that need a stronger acceleration are also given an impulse
for a few frames. For the finishing touch, we add a gravity field, which has an appropriate
magnitude for the units used. Maya's gravity fields default to 9.8, which is the correct magni-
tude when dealing with units in meters. This simulation was built using inches, because that
was the unit the original modeling information was recorded in, so we used 386 instead (the
acceleration of gravity in inches per second squared). As always, the only thing that matters is
what makes the scene look right, so feel free to experiment with gravity.
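The conversion is easy to verify: 9.8 m/s² × 39.37 in/m ≈ 386 in/s². If you prefer to set it in MEL (gravityField1 being the default name Maya gives the field):

setAttr gravityField1.magnitude 386;  // gravity for a scene modeled in inches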
Since many of the window panes are broken, not by direct contact, but by the bending
of the frame around them, we need to determine a timing sequence for activating these
panes. This is done by playing the animation of the structure's bending frame by frame and
deciding when each pane's window frame is mangled enough that the pane would shatter.
Before moving on to secondary shatters, we have a niggling bit of unreality to squelch:
when glass windows break, some of the glass remains within the frame, and some pieces are
merely loosened and fall a bit later than the rest of the window. For these effects, we will
select some likely shards and make them passive rigid
bodies. For some of these, we will add keyframes on
the activity attribute to release the loose shards a bit
after the rest of their windows have fallen out. This is
a simple, but important step. The fact that this speeds
up the animation a bit is just gravy.
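Keyframing a rigid body's activity takes only two keys per shard; here is a minimal sketch for releasing one loosened shard at frame 60 (the node name is a placeholder):

// Hold the shard passive through frame 59, then hand it over to the solver.
setKeyframe -attribute "active" -time 59 -value 0 looseShard_rigidBody;
setKeyframe -attribute "active" -time 60 -value 1 looseShard_rigidBody;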
Performing a secondary shatter on every shard
would be nice, but it's just not practical. Instead, we
must determine which shards are large enough to
attract the audience's eye and deal with those. To avoid
later headaches, we select those pieces we want to per-
form secondary shatters on and color them differently.
The pieces chosen are in yellow in Figure 7.28.
With the pieces deserving of an extra helping of digital aggression clearly marked, it is time to begin the secondary shatters. These shatters are performed according to the recipe we came up with during our research and development work. The only real problem is that the hypergraph gets a bit more complex when dealing with this many shards, as illustrated in Figures 7.29 and 7.30.

Figure 7.28: A shot of the model with shards selected for secondary shatter colored yellow
Figure 7.29: A screen capture of the hypergraph before adding secondary shatters
Figure 7.30: A screen capture of the hypergraph after adding secondary shatters
We find that the use of a different modeling layer for the secondary shards helps to
select and identify all the shards involved.
As you can see, most of the brain work was done in the research step. The production
step is mostly following a recipe and checking to be sure that it looks right. Only after deter-
mining that the primary and secondary shatters are performing well should we consider bak-
ing the simulation. This example is so complex that tweaking things is going to be more effi-
cient within the realm of the dynamics simulation than it will be once things are keyframed.
After baking, we check to ensure that the scene has turned out as desired and make any nec-
essary changes as long as they can be done quickly. If major problems crop up, it's back to
dynamics with this one.
After assembling the final scene, it is rendered and composited with the live action
footage. Figure 7.31 shows a still from this scene (an animated version, breakOutFinal.mov,
is available on the CD).
Summary
One thing should be abundantly clear from this chapter: dynamics simulation, while most
assuredly not the same as keyframed animation, is really more an art than a science. Sure, it
helps a lot to have a good sense of how the physics of reality work and to like playing with
numbers, but the real trick to making Maya's dynamics work for you (instead of the other
way around!) is to know when and how to break the rules to get the simulation to behave
robustly and eventually give you the right data to include in your larger project.
If nothing else, having control over your own little universe teaches you a lot about the
real one we live in, explains why activities like playing pool are so frustratingly difficult to
master, and explains why you can't put a broken pane of glass back together.
Complex Particle Systems
Habib Zargarpour
Industrial Light and Magic
Figure 8.1: The Andrea Gail as seen in the entirely computer graphics (CG) test shot used to deter-
mine the feasibility of creating the visual effects
developing 3D animatics, and involving as much R&D as can fit in the budget or timeframe
to be able to create the necessary effects. In this section, I'll discuss how to best plan for a
project with these preparations and why they are necessary.
Gathering Reference
In any kind of project, clarifying what you are trying to create is important. Whether you are
re-creating reality and nature on a project such as The Perfect Storm or creating never-
before-seen space anomalies for a Star Trek project, you need visual reference material that
describes what the effect should look like. In the case of re-creating nature, the reference
indicates exactly what needs to be done and will haunt you throughout the project as you
discover just how difficult and complex re-creating nature really is. In the realm of science
fiction, the reference helps to establish guidelines that ground and define an effect that might
otherwise be elusive. The reference saves a lot of time that would otherwise be wasted in try-
ing to define the look. In visual effects, indecisiveness can be costly, so you want material
that you and the client can agree on as a direction.
For science fiction projects, good artwork can also replace reference footage, especially
if the client wants animation that looks like nothing anyone has seen before. I place a lot of
importance on gathering good reference material, particularly if I am replicating something
in nature. The pursuit of creating a natural phenomenon is a never-ending process: you will
always want to improve it, and future projects will inevitably need it. For me, gathering ref-
erence does not end in preproduction but is an ongoing process. If you have a large crew,
organizing the reference and making it accessible online to the entire crew is always essential.
Figure 8.2: An early phase of the CG ocean and environment used in the CG test shot in Figure 8.1
Figure 8.3: Live-action footage from The Perfect Storm, which I used as reference to re-create crestMist. This shot is one of only two in the film that have real stormy seas. The Perfect Storm Copyright 2002 Warner Bros., a Time Warner Entertainment Company, L.P. All rights reserved.
are separated from the surface of the water, blown off the top of the waves, and carried away
with the wind. What makes this element challenging is how it transitions from dense water
into light spray and vaporizes.
If you haven't already, get The Perfect Storm on DVD. It was created from a digitally timed version of the film and is a great way to see the examples in this chapter in motion.
Early in the project's R&D phase, the crew took a field trip on a 50-foot fishing boat
out beyond the Golden Gate Bridge to experience firsthand what they were trying to re-cre-
ate, minus the 80-foot waves and hurricane winds! The 6-foot seas actually did get quite
rough (see Figure 8.4). While that doesn't sound like much, it was enough to make half the
crew seasick. In the image, you can see the splash resulting from the bow of the ship slam-
ming against a wave. The splash consists of chunks of water that spread out and transform
into smaller drops of water. The wind will cause the splash droplets to become even finer,
turning them into mist. At one point in the research, I went up in a chopper to get images of
breaking waves directly from above. Figure 8.5 shows a still from the chopper, which at one
point got within 3 feet of the water surface. A word to the wise: never challenge a helicopter
pilot on how low they can fly!
In this chapter, when we refer to wave height in feet, we are measuring from the
trough (the lowest point) to the crest (the highest point) of the wave.
Interpreting Storyboards
Other than the script, you will usu-
ally also need storyboards in the
bidding and preproduction phase
to assess exactly what needs to
happen in the shot or sequence.
You need to know how to interpret
the storyboards, what to look for
that can help you predict prob-
lems, and how to use storyboards
to solve these problems. You can
use storyboards to answer the fol-
lowing questions:
• Is the camera going to be
moving?
• How large is the effect going
to be onscreen?
• Will other objects or actors be in front of the effect?

Figure 8.4: Image from a fishing boat about the size of the Andrea Gail
• How close are we going to be to the effect and how much detail will be visible?
• How many more shots are there like this one?
• Will this shot be cut in between other shots with practical effects?
• How many 2D and 3D shots are there going to be in total?
• Is the effect going to be coming at the camera? This is the most important question if
the effect involves particles.
You should be able to answer most of these questions from the storyboards, with the
exception of the camera motion if it is not indicated. A particle effect that needs to come toward
the camera is an immediate red flag in terms of resources, R&D time, and rendering effort.
In the example storyboard in Figure 8.6, you can see the letters "VFX," indicating that
this shot will need some visual effects and cannot be done entirely as a practical or live-
action shot. You can see that actors will be close to the camera in rough water. The arrows
indicate that the camera will tilt down to follow the jump of the rescue diver.
If you are lucky, you will have access to 3D animatics from a process called "previs,"
which stands for pre-visualization. Previs, which is becoming quite popular, usually consists
of rough 3D animations that are like moving storyboards. They indicate the action that takes
place, the composition, and what the camera sees. Good animatics also include accurate lens
measurements as seen at the bottom of Figure 8.7.
For The Perfect Storm, we created the animatics ourselves as part of the preproduction
phase, and we took advantage of the opportunity to make them as accurate and realistic as
we could. We used physically correct boat dynamics and the actual ocean simulation data
that we were going to use in production. Thus, creating the animatics became the way we
Figure 8.5: A photograph taken from a chopper directly over some breaking waves. Open-
sea-breaking waves are different from shore-breaking waves, but the foam left behind is similar.
animated the shots for the real show, and then we refined the movements from the simula-
tions as necessary. Our animators could "steer" the boats by animating the rudder and con-
trol the throttle to get realistic
results. It might seem simple to ani-
mate a boat over waves, but as we
learned from trying to refine the
simulations, it's an exact, unforgiv-
ing science. Our animatics were a
good indication of what the cam-
era would be doing in a shot and
also gave the stage crew a good ref-
erence for the continuity of the
action when they were shooting
the boats on stage.
So now you have seen the
storyboard and the animatic. Fig-
ure 8.8 shows a frame from the
final result, in which all the plan-
ning paid off. It answers some of
the questions posed earlier:
• The camera will be moving.
• The effect will cover 75 per-
cent of the screen.
• Actors will be in front of the
effect with medium detail
visible.
• The particle effects will not be coming at the camera.

Figure 8.6: A storyboard (by storyboard artist Phil Keller) from the chopper rescue sequence of The Perfect Storm
Figure 8.8: The final composite of the shot in Figure 8.6, shown here with the full height of the
action as the camera pans up and back down. You can see a breakdown of this shot by viewing
mrl35104_breakdown2.mov on the CD-ROM. The Perfect Storm Copyright 2002 Warner Bros., a
Time Warner Entertainment Company, L.P. All rights reserved.
To better understand how some of the elements connect, take a look at Figures 8.9,
8.10, and 8.11. The blue-screen image in Figure 8.9 is used in the foreground for the water
and the crew in it, but all the water and activity from a few feet in front of them to the hori-
zon was computer generated to match. The biggest challenge here was to find a piece of
wave from more than 50 simulated oceans that would match the live-action movement of
the water filmed in the tank. This was done by our animators who meticulously tried match-
ing section after section of ocean until one of them finally worked. In the future, one can
imagine simulation software that would automatically model the foreground ocean and
extrapolate the rest of the surrounding ocean. The CG ocean in Figure 8.11 is composited
with the CG mist in Figure 8.10, along with a dozen other CG elements not shown here, to
give the final composite in Figure 8.8.
The question of whether a shot will be cut in between other live-action shots with prac-
tical effects is important because then you know your effects need to match the surrounding
footage. This usually means a lot more hard work because the results have to be seamless;
matching anything is difficult, let alone matching reality.
Figure 8.9: This blue-screen element of the Mistral crew shot in the 100-by-95-foot indoor tank, the
largest of its kind at the time, and some light foreground mist are the only practical (real) elements in
the final composite shown in Figure 8.8.
Sometimes the research for one effect can be applied to another. In developing crest-
Mist, we realized that many other types of mist could benefit from some of the work, such as
the following:
• crestWisp: mist blown off small waves of about 3 feet or smaller
• mist: atmospheric mist that included background, mid-ground, and foreground mist
• troughMist: mist that traveled near the ocean surface like a blanket
• splashMist: mist coming off a splash from the boat pounding the waves
• chopperMist: resulting from the blast of air generated from the rescue helicopter's blades
By now you have noticed all the unique names for various elements. I'm a stickler for
naming conventions, and since we were going to create many elements, we might as well name each one in a way that let us know which part of a shot we were talking about when communicating in dailies, shot reviews, and e-mail. The names
should be simple and short and convey clearly what the element is.
Figure 8.10: The CG crestMist generated for this shot had to be combined with the mist generated from the CG helicopter, seen here together.
Figure 8.11: The CG ocean generated for this shot had to match and connect with the waves in the live-action footage in Figure 8.9, including the motion blur.
has completely unknown repercussions. It is still important to outline the possibilities in terms
of technique and estimate how many resources and how much time each will require.
You can always approach a problem in more than one way, but one technique will
always be better than the rest. You need to determine how many people it will take to com-
plete the R&D for each task. You then use this estimate to determine the budget for the
R&D phase along with the resources and time required. You're in luck if the project is simi-
lar to work done in the past, but frequently this is not the case because directors want more
and more to include never-before-seen effects in their films.
You might need to create entirely new simulation engines and, in some cases, custom
renderers, which is why it's important to communicate with your software group at the plan-
ning stage (or create a software group if you don't have one). If the software group is aware
of your challenges and issues from the start, they may be able to anticipate some of your
needs. The number of shots in which a particular effect will be used determines how stream-
lined the process needs to be. If the effect will only be used in one or two key shots, spending
resources to allow it to be easy to use or understand doesn't make sense. On the other hand,
if the effect will be used in hundreds of shots, it is clearly worthwhile to spend resources on
streamlining, documentation, and testing in preproduction.
If the effect you need appears only in one shot, you might consider a practical ele-
ment because the development costs may far outweigh the one time the effect will
be used. The more shots and camera angles in which you need the effect, the more a
CG element will pay off. Of course, if what you need cannot be done using practical
means, that makes the decision easier.
Figure 8.13: An all CG shot from The Perfect Storm. The Perfect Storm Copyright 2002 Warner
Bros., a Time Warner Entertainment Company, L.P. All rights reserved.
If the director says, "I want it to look like steam coming out from a nozzle and dissi-
pating naturally," your best bet is to shoot a practical element. On the other hand, if the
director says, "I want it to look like nothing we've seen before, and it needs to wrap around
the actress's head and then vanish to the left," clearly CG is your best bet. If you need realis-
tic gas interacting with a computer-generated character or object, in most cases you're better
off attempting to create the element in CG.
The Perfect Storm trivia: From the more than 400 visual effects shots in the movie,
roughly 200 involved 3D work such as adding the ocean, and of those, 96 shots
were entirely CG, including the boat, ocean, splashes, mist, rigging, sky, light-
ning, drips, and crew (see, for example, Figure 8.13). Many of these shots show
the boat almost filling the screen and are cut back to back between live-action
shots filmed on set in the 100-by-95-foot tank on Stage 16 at the Warner Bros.
Studios in Burbank, California.
For many of the shots in The Perfect Storm, the nature of the action demanded detailed
integration of the motion of the waves with the movement of the boat and the water splashes.
The most logical approach for these shots was to make everything digitally since the ocean,
splashes, sky, and atmospheric haze were already digital. Figure 8.13 shows an example of
such a shot in which the boat and crew are also digitally generated. To see a moving version of
Figure 8.13, visit www.howstuffworks.com/perfect-storm.htm.
Figure 8.14: The example crestMist reference with the various parts named. The Perfect Storm
Copyright 2002 Warner Bros., a Time Warner Entertainment Company, L.P. All rights reserved.
doing it that way helps with the process of elimination. If you are lucky, some of the larger
decisions will have already been made by your visual effects supervisor, such as whether the
show will be primarily done with miniatures, full-scale practical effects, or CG. If you are the
visual effects supervisor, good luck.
For The Perfect Storm, the decision was based on the minimum scale of water in
miniatures and the size of the waves that needed to be re-created. Clearly, using a real stormy
sea with 150-foot waves was out of the question since you would have to wait another cen-
tury and you would likely not survive it, let alone film it. In fact, we had a difficult time find-
ing good reference footage because, for some strange reason, when people's lives are in dan-
ger, filming is the last thing on their mind! What were they thinking?
In the movie, two shots were filmed on a real stormy sea (about 10-foot waves). All the
rest were either shot on stage in a tank with CG ocean extension or were entirely CG shots.
Water, like fire, is an element with which people are familiar, and you can't reduce the scale of
a miniature past a certain point without the drops of water looking disproportionately large.
I recall a classic film from the late '70s or early '80s that used miniature water effects;
the droplets were about the size of people's heads. The smallest scale you would want to use
with water involving splashes would be one-fourth or one-fifth scale. Even at one-fourth scale, a 100-foot wave is still 25 feet high, which is not much of a miniature.
In The Perfect Storm, we also needed to cover many square miles of ocean, which
would be a vast area even at one-fifth scale. So the logistics of composing and creating this
deadly and hostile environment and carefully designing each shot pointed to primarily using
computer-generated oceans and waves. In the next section, I'll discuss the details of how we
created stormy seas on the computer.
For completeness, here are the options we considered when deciding how to create the
crestMist effect:
• Full-scale practical elements
• Miniature elements
• Computational fluid dynamics integrated within vendor software such as Maya
• Computational fluid dynamics using a standalone in-house tool
• Particle simulation in vendor software such as Maya
• Particle simulation in Maya with custom field plug-ins
For example, the director asks you to increase the height of the waves in a particular
shot. This request will clearly mean that the motion of the boat will have to be re-simu-
lated/animated. The resultant splash from the boat impacting a wave will be bigger and
cause too much mist to cover the boat. Now if the entire system is one big fluid dynamic sim-
ulation, you would have no direct control over individual splashes or events; you would con-
trol only general global parameters such as viscosity, wind, and gravity.
In movies, there is no such thing as "well that's just what would really happen, so sorry
we can't change it." The director has a certain vision for each shot and wants control over as
many aspects as possible (additionally, most controls are used at the request of the visual
effects supervisor to hone in a shot). Directors have learned to accept the limitations of the
real world when involved with practical live-action effects, but they are choosing to embrace
the realm of digital effects with the promise that anything is possible and everything can be
changed. It is our job, when setting up such effects, to ensure that we can adjust all the
important aspects of an effect. By separating the elements logically, we help both the artist
and the director to attain their goals. Figures 8.15 through 8.22 show several of the dozens
of elements that composed the shot from Figure 8.23.
Figure 8.15: Eight elements of the all-CG shot from Figure 8.13. The gray shaded version of the
Andrea Gail.
Figure 8.16: The Andrea Gail rendered with textures and materials
Figure 8.17: Close-up of the ship showing the digital stunt doubles and simulated buoys
Figure 8.18: The simulated ocean rendered with all the various types of foam, including the wake
from the boat
Figure 8.19: Run-off pass of water from previous waves hitting the boat. You can see this element in
motion by viewing pwl73020_runOff.mov on the CD-ROM.
Figure 8.20: Splashes around the boat generated by its movements against the ocean surface. You
can see this element in motion by viewing pwl73020_sideSplash.mov on the CD-ROM.
Figure 8.21: The splash created by a wave crashing onto the boat
Figure 8.22: Volumetric render of the light beams from spotlights on the boat
Figure 8.23: As you saw earlier in Figure 8.13, here is the final frame of the shot with all the
elements integrated. The Perfect Storm Copyright 2002 Warner Bros., a Time Warner Entertainment
Company, L.P. All rights reserved.
In the case of crestMist, one of the requirements is to cover miles of ocean with waves
of up to 150 feet in height. A key factor in using fluid dynamic simulations is the size of the
volume it needs to cover. Within this volume, the amount of subdivision of the grid deter-
mines the level of detail you get from the simulation. For example, a volume that is 1 mile by
1 mile by 150 feet needs a grid that is 5280 by 5280 by 150, which results in severely slow
simulations that take enormous amounts of memory if you want the finest level of detail to
be 1 foot. In practice, because of the low visibility in stormy conditions, we rarely had to
generate crestMist beyond that range. There was the possibility of using multi-resolution
nested simulations, but we would need to track each and every wave that was generating
mist with a high-resolution grid, which would have been very complex.
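To make the numbers concrete: 5280 × 5280 × 150 is roughly 4.2 billion cells, so storing even a single 4-byte float per cell would take on the order of 16GB for one field, before the simulation itself does any work.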
The advantage of using particle simulation is that the level of detail of the motion can
be as small as you want over as large an area as you want. The disadvantage over the fluid
dynamic grid simulation is that you will need many particles to establish a gaseous misty
look that can be done easily with volumetric rendering of grids. The other advantage of
using fluid dynamics is that the ocean and air are simulated at the same time, forcing the mist
and waves to move together cohesively within the same space. Using particle simulation, you
would have to generate artificial forces that would allow the particles to respond to the ris-
ing and falling of the ocean surface. For The Perfect Storm, we were simulating the oceans
separately over a coarser grid that was fine enough to define the broader waves but not fine
enough to get detailed mist turbulence. Eventually we chose to use particle simulation as our
primary technique for crestMist. Using particle simulation allowed us to get the large cover-
age we needed with as much detail in the motion of the particles as necessary without having
to deal with exorbitantly high simulation times.
I often use color encoding to visualize the multitude of attributes and data on thou-
sands of particles. This involves the use of a temporary runtime rule to place the information
you are trying to see directly within each particle, as shown in Figure 8.24.
The following is an example of such an expression:
vector $vel = velocity;
rgbPP = <<$vel.y/5.0, age/lifespan, radius*10.0>>;
This example uses the color red to measure the upward velocity of any particle. Because the
components of vector attributes cannot be accessed directly within MEL expressions, we
assign the velocity vector to a temporary vector variable called $vel. You must scale different
variables in order to fit them within the 0 to 1 range of a color; in this case dividing velocity
in Y by 5 and multiplying radius by 10 normalizes each value appropriately. The parameter
age divided by lifespan is always within the 0 to 1 range.
Figure 8.24: A sample image showing the use of color in a splash simulation in The Perfect Storm
example, for The Perfect Storm we used spline controllers. Spline controllers are simply nodes
that are curve primitives built into specific shapes that are instantly recognizable for what they
do. If you are creating a custom plug-in such as a force field, you can also create your own cus-
tom manipulators that don't need a separate node to contain them. In Figure 8.25, you can see
the use of spline controllers shaped into simple arrows to control the wind and wave direction.
You can also see the connections between the spline controllers and the expression node in the
Hypergraph.
We used color coding to show which arrow was for which function. In this case, we
used the larger purple arrow to set the wind direction, and we used the smaller green arrow to
set the wave direction. The wave direction arrow was a slave to the wind direction by default,
since this is generally the case in stormy sea situations. However, for creative cinematic rea-
sons or in certain situations, you can set the two directions independently, sometimes even in
opposite directions, by simply breaking the rotational constraint between them.
Figure 8.25: Spline arrows used to control the wind and wave directions in a scene in The Perfect
Storm. The use of color coding was important. Here the purple arrow controls the wind direction,
and the green arrow controls the wave direction.
Figure 8.26: The centralized control node for crestMist is a spline in the shape of an E.
attributes in one place. To use an attribute, simply select the appropriate E expression node
in the Perspective window without even opening the Outliner to look for it. In Figure 8.26,
you can see the attributes added to the crestMist expression node in the Channel box. The
w i n d A m p attribute controls the overall wind force, and s p r a y U p A m o u n t is the initial upward
velocity.
Units are not important here. These units are simply the numbers that gave the
best visual results for the average wave height. We had to adjust the numbers for
each shot, depending on the wave height and the violence level of the storm at
that particular time in the film.
As you develop your effect and use it in a test shot, it will become evident which of the
attributes you are accessing most frequently. Make a list of these attributes and start priori-
tizing them so that you can select only the most important to be connected to the centralized
control node. You don't want to connect too many attributes for two reasons:
• It will get too confusing.
• The evaluation order in Maya can be affected by the connections in the Hypergraph, so
you might modify the original behavior of your setup if you go overboard with the
connections.
Once you have made your top attribute list, organize the items into groups that make
sense, and add them to the centralized control node in that order so that they appear in the
Channel box with that organization. For example, if you have a global switch to turn the
expression on or off, that attribute should be either at the very top or at the bottom of the
list. Vector attributes need to stay together in an X, Y, and Z order, top to bottom. As your
testing continues, you can add or remove attributes as they become more or less important
as part of the centralized control, and by the time you use the attribute on a real shot, it
should be just what you need.
The following is an example of a particle runtime expression using the global variables
windDirection and windAmp:
velocity += $windDirection * $windAmp;
When you use global variables, you can access them anywhere in the scene, and you
don't risk modifying the evaluation order in the Hypergraph because you haven't made any
direct links between two nodes' attributes.
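One way this might be wired up, shown here as a sketch rather than the production setup, is a
small scene expression that copies the controller's values into MEL global variables each frame,
and a particle runtime expression that reads them. The controller node name crestMistCTRL and
its attribute names are illustrative assumptions:

// scene expression: copy the controller attributes into global variables
global vector $windDirection;
global float $windAmp;
$windDirection = <<crestMistCTRL.windDirX, crestMistCTRL.windDirY, crestMistCTRL.windDirZ>>;
$windAmp = crestMistCTRL.windAmp;

// particle runtime expression: re-declare the globals, then apply them
global vector $windDirection;
global float $windAmp;
velocity += $windDirection * $windAmp;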
When multiplying a vector, be sure that you don't multiply it by another vector
unless you are really interested in the dot product of the two, because that's what
you'll get. A dot product is essentially the projected length of one vector onto
another.
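For example, in MEL:

vector $dir = <<1, 0, 0>>;
vector $other = <<0.5, 0.5, 0>>;
float $dot = $dir * $other;      // vector * vector yields the dot product, here 0.5
vector $scaled = $dir * 2.0;     // vector * float simply scales the vector to <<2, 0, 0>>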
Figure 8.27: The Channel Box with only the relevant attributes of the node visible
In Figure 8.27, you can see an example of the attributes on the main controller node for
crestMist. The regular transform attributes such as translation, rotation, and scale have
been hidden, and only the added user-defined attributes are visible. The most important
attributes are placed at the top of the Channel Box, such as sprayUpAmount, which determines
the initial velocity of the mist, and windAmp, which is the global wind speed. Some of the
attributes are simply connections to the same attribute on another node and are connected
here to make them easier to access. For example, here the LOD attribute is connected to the
levelOfDetail attribute of the particleShape node. The LOD simply reduces the particle
count as a percentage so you can see faster simulations with a fraction of the particles without
changing all your emitter rates. An LOD of 0.5 would mean that Maya will keep only 50 per-
cent of all the emitted particles and discard the rest.
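The following is a rough sketch of how such a pass-through attribute might be added and
connected; the node names crestMistCTRL and crestMistParticleShape are illustrative assumptions:

// add a keyable LOD attribute to the controller, limited to the 0 to 1 range
addAttr -longName "LOD" -attributeType "double" -min 0 -max 1 -defaultValue 1
   -keyable true crestMistCTRL;
// pass it straight through to the particle shape
connectAttr crestMistCTRL.LOD crestMistParticleShape.levelOfDetail;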
Otherwise, you would be keyframing each particle by hand! The whole concept of using simulation is to
try to control millions of tiny elements by using broad global controls and rules. The answer
to how much control you will need depends largely on the director. Some directors are more
easygoing than others about the details in a shot. The realistic nature of what you are trying
to create will also determine how much control you will need. You will have to take com-
ments such as "make it scarier" or "it needs to feel menacing" and interpret which controls to
change on your simulation to fulfill the request.
Advanced Techniques
As you embark on your R&D task, you have to determine which forces you need and how
much control you need over them. In this section, we'll describe some techniques and give
you some ideas about how to approach the problem. How you think about a problem will
determine the quality of your solution. Knowing the forces you'll need and creating the
forces you don't readily have available can make or break an R&D project.
Figure 8.29: The top view of a section of the ocean showing the crestMist particles emitted at the
crest of select waves
Figure 8.30: Images of the triple emission cycle in one of the attempts to add structure to crestMist.
ParticleA emits particleB, which then emits particleC. This method ultimately gave too much struc-
ture, which looked like snow.
Figure 8.31: The sum of three force vectors from three different fields and the resultant vector
Figure 8.32: A turbulence field needs to move slightly faster than the particles blown by a wind field
Figure 8.33: A frame of the crestMist simulation showing the separating gap between the mist and the top of the wave
Figure 8.34: A frame of the crestMist simulation showing the separating gap fixed
Figure 8.35: A CG mist element that used 10 times the number of particles in a tornado from Twister
Rendering Techniques
Simulating particles is one-half of the battle; rendering them is the other half. When you ren-
der particles, you can choose from many options. You can render particles within Maya
itself, or you can export them and render with other vendor software such as RenderMan
and Mental Ray. If you have sophisticated programmers, you can even write your own cus-
tom particle-rendering software, which is what we did at ILM for the movie Twister. Later
we improved the renderer to handle the extreme particle counts in The Perfect Storm. Within
Maya is the option to use the hardware renderer, which is great for light dust elements and
gives you nice sub-frame motion blur, or you can use the software renderer, which can create
effects such as volumetric clouds and blobbies. Maya 4.5 has a whole new fluid dynamics
engine that has its own special shaders for rendering smoke and gaseous elements.
To render the mist you see in Figure 8.35, we quickly found out that we needed 10
times the number of particles that it took for a tornado of about the same size on screen in
Twister. This situation was unexpected and was due to the nature of mist. Individual
droplets had to be subliminally visible while looking gaseous. The finer the mist, the more
particles we needed to make them visible. Another aspect of rendering the mist was to keep
the solidity and density at its root where it rises from the ocean foam. Figure 8.36 shows an
element as seen from above; each small wavelet at the crest of the wave creates its own dense
crestMist.
Frequently, water splashes are not just white blobbies smeared by motion blur. There is
a lot of detail and specular highlights from the drops of water as they are exposed on film
during the shutter open phase. If you study photographs of splashes when a light source such
as the sun or a spotlight is present, you'll notice the following:
• Drops rarely appear as a perfectly straight line. They usually appear as a curved line
that is part of the parabolic trajectory they are traveling along.
• There are many bright highlights along the curve, and the edges are rough and uneven.
• The overall color appears gray or even transparent, depending on the amount of foam
within the droplet.
Figure 8.36: A CG particle rendering of a crestMist element. The camera is looking down into the
ocean surface.
You can observe this phenomenon by watching water spray out of a garden hose on a
sunny day or by watching water spray from a showerhead. Sea water tends to have a lot
more foam content, which makes the water droplets more murky and white and reduces the
transparency. The specular highlights are still brighter than the overall color of the foamy
droplet. Figure 8.37 shows an example of how we developed custom rendering tools to be
able to represent this kind of splash. In some cases, we used this technique when rendering
mist, for example, when a search light was aimed at the mist.
You can use the same method to create rain. The only difference is that you won't
need to curve the lines unless there is severe turbulence.
Figures 8.38 and 8.39 show a comparison of a real and a CG-rendered splash. Figure
8.39 shows a real splash shot on stage where all the detail of foamy airborne water in
motion can be seen. By studying these real water elements, and knowing we had to create
CG splashes that would be seen either side by side with the real splashes or in a neighboring
shot, we sought to match the look. Figure 8.38 shows the result of this work in a CG splash
element from a shot that was similar to the real splash reference. Next to crestMist, simulat-
ing and rendering these thick water splashes was the most difficult task on the project.
Figure 8.37: A CG rendering of a water splash, showing the curved shapes of the water droplets and
the specular highlights along each drop
Figure 8.39: The real splash element dropped from a 2500-gallon tank on stage (from John Frazier's
special effects team)
You need to focus attention on detail in reference images and the overall impression of
the item you are trying to replicate. To replicate a natural phenomenon, you need the essential
ingredients, regardless of how impressionistic they are; you don't need every single detail. For
example, a totally photorealistic painting looks fake on film, whereas just the right amount of
subtle impressionistic representation of the colors in the painting will look real on film.
Creating rough seas is difficult because of the transitions and transformations. Water
goes from being a discrete liquid surface to foam, to a splash, to blown mist, and finally to a
disappearing vapor. Representing all these elements in a single automatic process is desirable
but difficult. Figure 8.40 shows the culmination of the efforts to create the crestMist element
and have it appear natural and full of life. You can see this element in motion by viewing the
mrl26b024_crestMist.mov file on the CD-ROM.
Figure 8.40: A frame of the CG-rendered crestMist element from a final simulation
Figure 8.41: The shelf for The Perfect Storm, showing some of the buttons we used to add effects of
various kinds to the scene
A button can execute MEL commands, load plug-ins, and create connections between
nodes. You might be able to contain all the necessary commands to set up a scene from
scratch within the button, but an easier solution is to simply import a "generic" scene file
that you have previously created with everything you need already in it, make the necessary
connections, and allow the user to customize the settings for their shot. No doubt you will
end up with many buttons no matter how small or large your project. It's useful to give all
the custom show buttons a home on the same shelf, as we did for the Storm shelf in Figure
8.41. There were buttons to create oceans, buttons to browse through the 80 different
oceans, buttons to make foam, and buttons to create crestMist.
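As a rough sketch, the MEL behind such a button might look like the following; the file path
and node names are illustrative assumptions, not the production setup:

// import a previously prepared "generic" setup into the current shot
file -import -type "mayaBinary" "setups/crestMist_generic.mb";
// make the shot-specific connections the user will then customize (names are hypothetical)
connectAttr oceanCTRL.windDirection crestMistCTRL.windDirection;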
Many artists contributed to the show shelf, each being responsible for their own
unique, all-powerful button, and they took pride in making their button work efficiently and
consistently for the rest of the setup.
Figure 8.42: The original background plate showing the full-size replica of the Andrea Gail con-
trolled by a giant submerged gimbal system built by John Frazier's special effects team
A well-documented setup or plug-in can be distributed to a large crew with minimal support required. The easier it is
to use the setup and the clearer the documentation, the less support a large crew needs, and
the faster the work can be completed. It is always difficult and tedious to write good docu-
mentation, but each well-documented feature is one less question needing to be answered.
The larger the crew, the more this can add up and the more use the documentation will get.
And that's the real payoff for creating thorough documentation.
Final Result
After all the hard work, now you get to sip coffee while clicking the crestMist button and voila!
Well...maybe it's not quite that easy, but, hey, these aren't easy elements. It's amazing to be able
to replicate something complicated in nature, make it fully controllable, and at the same time
make it relatively easy to use. The crestMist element turned out to be difficult, and it wasn't
always the most obvious part of a shot, but it clearly made a difference in helping the audience
feel the power of the storm in every shot. Figure 8.44 shows a frame from a shot in the rogue
wave sequence of the film. The boat was the real replica of the Andrea Gail shot in the tank in
front of a blue screen, seen in Figure 8.42, and most of the water surrounding it was replaced by
the CG ocean and crestMist in Figure 8.43. You can see the subtlety of the mist but also its
importance in the final frame.
Figure 8.43: A wipe showing the ocean wireframe and the rendered ocean with simulated foam and
crestMist blowing off the top of the wave
Summary
We have covered many aspects of the vast area of particle simulation, from gathering refer-
ence to preparing effects to be used by many people on a large-scale project. Technology is
changing rapidly, and many new tools are available to help us beat Mother Nature and cap-
ture even the slightest glimpse of her spontaneous beauty. Particle simulation plays a big part
in creating effects while maintaining full control over them—essential in production. Direc-
tors are always going to want the latest never-before-seen effects, and effects animators are
going to find more creative ways to use complex particle systems to meet the director's vision.
Figure 8.44: The final composite showing the finished shot with CG ocean extension, crestMist, sky,
and haze
New fluid dynamic simulations are also going to be added to the mix of tools. Using
particle dynamics is very much a Zen art in which you have to learn how to work with the
forces and not fight them.
Many things happen by accident in dynamics, and the techniques covered in this chap-
ter will hopefully help you diagnose the bad accidents and keep the good ones.
Creating a Graphic User Interface
Animation Setup for Animators
Susanne Werth
For a thorough introduction to Maya, see Maya 4.5 Savvy, by John Kundert-
Gibbs and Peter Lee, from Sybex, Inc.
If you have an idea for a script, the best way to begin is to check out the MEL commu-
nity on the Internet for similar scripts. A good source is the MEL page of Highend3D
(www.highend3d.com). You might find scripts you can use right away or scripts that will suit
your needs with only modest adjustments.
The idea for the animation script in this chapter was inspired by a tutorial by Ron
Bublitz, who created a character picker for Maya 2.x. You can find the script at
www.highend3d.com/maya/tutorials/ron1/.
In a big production team that includes a lot of international people, I often use a picture-
based GUI, and this is the interface I am going to explain in this chapter. The basic idea is
that the animator can choose the handles of the character intuitively, by clicking an icon placed
at the same body position in the picture as the corresponding handle or animation control on the model.
The first step is to render a picture of the character and print it or to sketch the charac-
ter if modeling is not yet complete. Make a list of all the handles you want to include in your
interface, and try to position them in your picture. You'll use this list later when you create
the pattern of the icon positions.
If you need to include a lot of handles, try to position them horizontally. That way you
keep the number of icons small. The simpler the pattern, the less work you have to do to
position the icons with the script.
Some animation controls, for example, hand and facial controls, are better represented
as sliders. You can include them in your character control GUI, but if you have a complex
character, I recommend creating an extra GUI window for those parts.
Here are some other questions to answer when planning your GUI:
• Does your character setup include several skeletons that will be animated?
• Must you include handles for a Mocap, Offset, IK, and/or FK skeleton?
• Do you want the animator to be able to choose between these options?
During the planning process, try to get as much input from the animators on
your team as you can. Talk about their experiences in other projects, which tools
they used, what they would like the tools to be able to do, and so on. The more
information you collect before you start scripting, the less time you lose making
changes later.
In this chapter, I'll show you how to change between different skeletons while using only one
interface.
Last but not least, you must know if the GUI has to work for one or for several charac-
ters. In a big production, animators might need to control more than 10 characters. Creating
a different GUI for each is too much work. You can, however, modify the interface so that
the MEL script works with all 10 characters.
Distribute versions of the GUI to the animators for testing as soon as possible. They
are the ones who have to work with it, so try to meet their needs and get feedback
as soon as you can.
Naming Conventions
You can select a node in a hierarchy in different ways. You can address the node with the
complete path or by its unique name. Figures 9.1 and 9.2 show two character hierarchies of
the same skeleton structure.
If there is only one character in a scene, it doesn't matter which method you choose.
But if two characters share the same hierarchy structure in the skeleton, Maya will be con-
fused if you do not use the complete path method. Figure 9.1 shows two models that have
the same hierarchy structure.
Figure 9.1: Two characters with the same hierarchy structure and no prefix
Figure 9.2: Two characters with the same hierarchy structure and prefixes
If you try to select the node root_h_M of model puppet_G_1 by
typing the following at the command line:
select " r o o t _ h _ M " ;
Maya responds with the following error message:
// Error: line 1: More than one object matches name: root_h_M //
You must use the following full path of the node:
s e l e c t "puppet_G_l p u p p e t S k e l r o o t _ h _ M " ;
As you can see, the script gets complicated and excessively long when using this tech-
nique. You can easily achieve the same result by using models like those shown in Figure 9.2.
Each model has a prefix that runs throughout the hierarchy. Therefore, each node has a
unique name, even if the skeleton structure is identical. To select the root handle of the first
character with the name AD_puppet_G requires nothing more than the following:
select "AD_root_h_M";
With the prefix method, you can use the same skeleton to create different characters.
To distinguish the characters, run a prefix-rename script over the hierarchy structure, or,
while importing or referencing the character scene, follow these steps:
1. Choose File → Import or File → Create Reference.
2. In the Name Clash options, set the drop-down boxes to Resolve [all nodes] with [this
string].
3. In the With field, enter your prefix, and be sure not to put in the underscore.
4. Set the File Type to the appropriate format (Maya Binary or Maya ASCII).
5. Click the Import button or click the Reference button and select the scene file.
Maya imports or references the scene file and prefixes all the nodes without your having to
do anything further. An underscore is automatically added between the prefix and the origi-
nal node name.
The prefix can be a complete name or only a letter. I recommend positioning identification
tags at the beginning of a node, but you can proceed in several ways. Depending on your project
or database structure, you might want to place letters at the end of the node name instead.
A naming convention requires that all members of the team abide by the rules you
establish. If the GUI setup must work with more than one character, all characters
must be based on the same character setup and must have the same name in each
node, aside from naming prefixes.
You can use the MEL command tokenize to search for specific letters in a string. The
underscore in the name is used as a separator to distinguish not only between characters but
also between different versions of the same character.
For example, you can give a low-resolution animation model the prefix _A_, and you can
give a high-resolution model the prefix _H_. Regardless of the method, you can have different
versions of a character in a scene and select a version with the GUI if your script can search for
this attribute. In the GUI itself, you create the optionMenu that includes the choices.
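As a quick illustration of tokenize in this role, the following sketch assumes a node named
according to the convention above, with the version tag in the second position:

string $parts[];
int $n = `tokenize "AD_H_root_h_M" "_" $parts`;   // $n is 5
// $parts[0] is the character prefix ("AD"); $parts[1] is the version tag ("H")
if ($parts[1] == "H")
    print "high-resolution version selected\n";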
Figure 9.3: Defining the size of the icons
Figure 9.4: Naming the icons
// declare layouts, icons, buttons, etc.
showWindow windowName;
This code creates the window when the script is called, but Maya doesn't delete the
window if you don't tell it to do so. By using the following if command before the window
is created, you can ensure that you always have only one window of the same script running.
The if command queries (-q) whether a window with the name windowName exists (-ex):
if (`window -q -ex windowName` == true) deleteUI windowName;
When the if command returns true, the deleteUI command deletes the window before
creating a new one in the following line. The actual code in the sample script at the end of
the main procedure is:
if (`window -q -ex CharWindow` == true) deleteUI CharWindow;
window -w 500 -h 500 -t "Character GUI" CharWindow;
and
showWindow CharWindow;
At the end of the script, you call the main procedure, in which the window is created.
By calling the main procedure inside the script, you don't have to type that procedure
name every time after sourcing it. Especially during the debugging process, saving these key-
strokes can be helpful. Another possibility is to create a MEL shelf button that sources the
script and calls this procedure in one pass.
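Putting these pieces together, a minimal skeleton of this pattern might look like the following;
the procedure name charGUI and the placeholder layout are illustrative, while the window name
CharWindow matches the sample script:

global proc charGUI()
{
    // delete any window left over from a previous run of the script
    if (`window -q -ex CharWindow`) deleteUI CharWindow;
    window -w 500 -h 500 -t "Character GUI" CharWindow;
    columnLayout;                      // placeholder; the real script builds a formLayout here
    text -label "layouts, icons, and buttons go here";
    showWindow CharWindow;
}
charGUI();   // calling the main procedure at the end of the script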
Declare every layout with a unique name so that you can easily address a layout later. This is
important if you create other windows that act on data from your GUI, either by query or edit.
Second, declare all the children layouts you want to use, before you edit the formLay-
out, as shown in the following:
1. string $tabs = `tabLayout -imw 5 -imh 5 baseTL`;
2. setParent $form;
3.
4. string $head = `rowColumnLayout -nc 1 -cw 1 300 optionRCL`;
5. setParent $form;
6.
7. string $bottom = `rowLayout -nc 4 buttonRL`;
8.
9. formLayout -edit
10.    -attachForm $tabs "top" 30
11.    -attachForm $tabs "left" 0
12.    -attachForm $tabs "bottom" 30
13.    -attachForm $tabs "right" 0
continues
You can think of a GUI layout structure as a hierarchical structure. On top, you have a
formLayout, in the example named with the string $form. At the next level, you declare the
tabLayout baseTL, the rowColumnLayout optionRCL, and the rowLayout buttonRL as
children of the formLayout baseFL side by side. When you declare the children, you must
know the level to which they belong. In lines 2 and 5, you specify the level by telling MEL
which parent the child layout has. In the example, you set $form as parent. Another possibil-
ity is to jump up one level in the hierarchy by typing the following:
setParent ..;
The formLayout type of layout will not work on its own to create a layout for
your window. Likewise, it is extremely important to let MEL know that the sub-
layouts (such as rowColumnLayout) have the formLayout as their parent (using
the setParent command or the parent flag of the child control/layout). If not,
the script generates an error when it attempts to build the window. Because
MEL is fussy about exactly how layouts are related, you might want to look at
the MEL reference for more on parenting layouts.
This list will contain all characters and set elements. If you distinguish the character with a
prefix or postfix, you can tokenize each element of the list; if you get the results you search
for, save it inside the $charList that is displayed in the optionMenu.
The following code shows the use of the object of type character.
// --optionMenu list of all characters in the scene--
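A minimal sketch of this pattern might look like the following; the optionMenu name charOM
and the flag values are illustrative assumptions, and the line numbers match the line references
in the next paragraphs:

2. string $charList[] = `ls -type character`;
3. optionMenu -w 200 -label "Character" -cc "upCharGUI()" charOM;
4. for ($char in $charList)
5.    menuItem -label $char;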
The variable $char in the for-in loop takes on the value of the first
array item, which is then shown in the GUI as a menu item with the name pulled from
$charList using the menuItem call, and it runs through each element in turn until the end of the
array. The for-in loop is preferable to the second example because it runs through the list more
elegantly. Line 3 of the previous code declares the optionMenu by specifying the width and
label text. You can declare the change command flag (-cc) here, executing the command specified
as a string as soon as a menuItem of the optionMenu is selected. See the script on the CD for
an example.
The menuItem itself is the text shown as an option in the selection list. Let's say you
have a scene with the characters "Louis_A_v3" and "Fred_R_v1". When the script runs
through line 2, the array $charList contains the following elements:
$charList = {"Louis_A_v3", "Fred_R_v1"};
In the first round through the loop, beginning in line 4, the variable $char has the value
"Louis_A_v3". When the first round of the loop is over, the value of $char changes to the
next array element, which is "Fred_R_v1", enters the loop in round 2, and creates a second
menuItem and list element with the label "Fred_R_v1". After that, the loop terminates
because the array $charList has no more elements. The loop runs as many rounds as ele-
ments exist in the array $charList.
Check Boxes
One way to select animation controls is to picture the skeleton with check boxes. The form-
Layout gives you the necessary control over positioning these elements.
Figures 9.5 and 9.6 show examples of interfaces that are realized with check boxes. In
Figure 9.6, an embedded image helps identify the handles. You can use any kind of image—a
drawing, a reproduction of the joints, and so on. Just use your imagination.
For each check box, you must declare a variable of type string, as in the following code.
1. string $a3 = `checkBox
2.    -w 12 -h 12 -vis 1
3.    -ann "head"
4.    -onc "getonc(\"head_h_M\")"
5.    -ofc "getofc(\"head_h_M\")"
6.    head_h_M`;
Figure 9.5: A sample check box layout without a background picture
Figure 9.6: A sample check box layout with a background picture
Line 2 specifies the size of the check box. If you don't want an extra border around the
check box, use a size of 12 x 12 pixels. Otherwise, increase the values of the width (-w) and
height (-h) flags. Even if the visibility is enabled by default, you must establish a visibility
flag. Because you position the FK and IK handles on top of each other, you switch visibility
on and off depending on the ik_fk attribute value.
The annotation in line 3, defined by the flag -ann, is a little pop-up text box, with the
text in quotation marks. This text is "head" for the handle "head_h_M". An annotation such
as this helps the animator select the correct control, especially if the controls are close
together. In addition, annotations help you debug mistakes in the declaration of the icon
positions, because the icons won't end up in the correct position if there's a mistake. Some-
times they end up on top of each other in the upper left corner. Using annotations, you can
quickly analyze the icon name by moving the mouse cursor over the misplaced icon.
Lines 4 and 5 declare commands that will be called when the check box is turned on
(-onc) or off (-ofc). You can either enclose all the commands in quotation marks separated
by semicolons, or you can call a procedure that contains these commands.
Line 6 contains the name of the check box. No matter which icon, button, or check
box type you use, be sure to name each one. When you run the script, the icon and button
attributes are changed by some procedures; therefore, you need the unique names of these
elements. If you don't create meaningful names, Maya creates names such as checkBox1,
checkBox2, checkBox3, and so on. Maya counts serially from the start of the Maya session
and not from the pop-up of each window, which means that you can't count the number of
check boxes in a window and draw a conclusion about the number of a check box. What
you're looking at might not be the first window created. To avoid confusion, create useful
names from the beginning.
I'll discuss the setup of the procedures getonc and getofc later in this chapter.
Radio Buttons
If you decide to use radio buttons, don't forget that a radio button toggles states automati-
cally. Thus, you can't select more than one radio button at a time. MEL uses the flag -add
with the select command. The check boxes allow multiselection. If you toggle the selection,
only one element at a time can be selected, and the previous selected element of the active list
is replaced by the currently selected element. Only one radio button of the radio button
group can be selected. Figure 9.7 shows an example radio button layout.
You might want to use radio buttons if you want to let the animator choose between
the additive and the toggle selection method. To provide your animator(s) with both in the
same interface, create a control to change between check boxes and radio buttons. Using an
update procedure, you provide the interface once with icons or check boxes. If the animator
selects the toggle method, you display the radio button version. Of course, you can also pro-
vide a toggle selection with icons and check boxes, but by providing radio buttons you
underline the method visually.
On the CD-ROM, you'll find the radio button layout shown in Figure 9.7 as
charGUI_radio.mel. It works with the woodenDoll.mb file to select handles one
at a time. Selection is the only functionality of this script. You can use it as a
start and then embed more functionality.
Icons
The following icon types are available in Maya:
iconTextStaticLabel A static label that is not useful for our purposes, because it can't
be connected to a command.
iconTextRadioButton An icon with text and/or images that behaves like a radio but-
ton. If you mix different icon types and want to use the radio button behavior for a
control that should be turned off if the state of another icon is turned on, consider that
the pressed state of the radio button can only be toggled by another icon or button
with radio button behavior.
iconTextButton An icon with text and/or images; a command can be executed when
the control is clicked or double-clicked. The visual state of the control doesn't change
to reveal its state unless you program an image exchange procedure that is executed
when the iconTextButton is clicked.
iconTextCheckBox An icon with text and/or images; a command can be executed
when the iconTextCheckBox is turned
on and another when it's turned off.
The state on or off is visually empha-
sized and can be increased by an image
exchange procedure.
symbolButton A button with an image
and a command that can be executed
when the symbolButton is clicked.
symbolCheckBox A symbolButton
that behaves like a check box. The
advantage of the symbolCheckBox is
that you can specify an image and/or
command for the status on and off,
enabled and disabled.
Figures 9.8 and 9.9 show the Maya
icon types.
Theoretically, you can use all kinds of
icons or symbol buttons, except the icon-
TextStaticLabel. Only the look and function-
ality you want to create is important as you
decide which call(s) to use. Using specific icon
or button types for certain functionalities
makes the functionalities easier to recognize.
Figure 9.7: A sample radio button layout
Figure 9.8: The Maya icon types in their unpressed state
Figure 9.9: The Maya icon types in their pressed state
In this chapter's script, I use the iconTextCheckBox. The following code shows an
example of the declaration of each icon's functionality.
1. string $imagePath = "F:/Sybex_Maya_Buch/picis/";
2. string $b10a = `iconTextCheckBox
3.    -st "iconOnly"
4.    -i1 ($imagePath + "b10.bmp")
5.    -w 13 -h 12 -vis 1
6.    -ann "IK_leftArmPoleVector"
7.    -onc "getonc(\"ikPoleArm_h_L\")"
8.    -ofc "getofc(\"ikPoleArm_h_L\")"
9.    ikPoleArm_h_L`;
In line 2, we attach the top side of the icon $b2, 5 pixels from the top side of the form-
Layout. In line 4, we attach the bottom side of the icon $b2, 5 pixels from the top side of the
icon $b3. The attachPosition option gives you the option of attaching the icon according to
the number of divisions you specified across the form. If you choose 100 divisions, position
50 is 50 percent.
The use of division numbers can also provide absolute positioning if you are
using a square picture. To achieve this, choose the same number of divisions as
the width and height of the picture in pixels.
Relative Positioning
Relative positioning is great if you want two icons or controls to stay side by side or in a spe-
cific order even when the window is resized. The following flags are available:
-attachControl [iconname] "sidename" offset [iconname]
-attachOppositeControl [iconname] "sidename" offset [iconname]
-attachPosition [iconname] "sidename" offset
When you use relative positioning, the position of icon $b is defined by the position of
icon $a and/or $c. The offset specifies the distance of $b from $a and $c. The flag -attach-
Control aligns the left side of icon $b to the right side of icon $a with an offset of n pixels.
The flag -attachOppositeControl aligns the right side of $b to the furthest side of icon $c
with a distance of n pixels.
If you position the icons in a relative way, you must cut up the background image com-
pletely. Depending on the number of handle controls in the model, this could be a lot of work.
Sometimes it's tedious to arrange the layout icons until they are all in place. It would be nice
to be able to create absolute placement and reduce the number of icons that have to be saved,
because only the position of the icons with functionality is needed.
If you create a space.bmp file that has a size of 1 x 1 pixel, you don't have to cut
and save the icons without functionality. With the information about the extent of
the area, you can use space.bmp as the image file instead, because these icons won't
be shown anyway (the visibility flag is set to 0).
Absolute Positioning
As I mentioned, you can attach by position in an absolute way if the image is
square, in combination with the declaration of a division number equal to the size
of the image. But what can you do if the image isn't square, as is the case with our
sample script?
Fortunately, you can use the attachForm flag to create absolute positioning
anyway. The idea is that you can position an icon by its corners. Because you can
give an absolute position with the attachForm flag for all sides of the icon, you
define the position of the corners.
Figure 9.10: The direction of the measure of the pixels in an upside-down coordinate
system, such as is used in Maya and other graphics programs such as Photoshop
If you use a graphics program such as Photoshop to cut the image into icons,
you might know that the width and height of an image are counted from the top left
corner. The formLayout positioning works in the same way. (See Figure 9.10.)
If you want to specify the position of the top left corner of an icon, you iden-
tify the position of the corner's left and top coordinates. Let's say we have icon $a3
in our sample script. When we cut out the icon from the rendered picture, we note the
position information for one corner. We need only one corner to specify the posi-
tion in the formLayout.
In Photoshop's Info palette, be sure to set the units to pixels before you use it.
Positioning in the GUI works like an upside-down coordinate system. The values you
get for the top left corner of icon $a3 are x = 82 and y = 20, which means 82 pixels from the
left side of the formLayout and 20 pixels from the top:
-attachForm $a3 "top" 20
-attachForm $a3 "left" 82
-attachNone $a3 "right"
-attachNone $a3 "bottom"
If you have difficulties imagining which side corresponds to which corner, think of
the sides of the control as straight lines. At the points where they cut each other, you
define the corners of the control. The two straight lines that create this point are the
sides you have to attach for this corner.
Procedures
Until now we have been concerned with creating the look of the GUI. We developed buttons,
icons, sliders, and so on. However, the interface doesn't do much yet because we haven't told
it to run any commands when the interface elements are manipulated. You specify the func-
tion of the interface with the following procedures, which are called when an icon, a button,
a check box, or a radio button is clicked.
When the user selects a character in the optionMenu, the command flag -cc calls the
procedure upCharGUI() to update the interface according to the selection. The first value we
need is the prefix of the selected character. We query it on the optionMenu and save the
result in the variable $prefix of type string:
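// a minimal sketch of this query; charOM is the illustrative optionMenu name used earlier
string $prefix = `optionMenu -q -value charOM`;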
This simple query code works only because our sample file woodenDoll.mb contains a
character object with the prefix AD as its name and a strict naming convention in the file. If you
don't search for character objects, you must parse the result of the ls command executed
earlier in the declaration of the optionMenu, to find out the prefixes of the characters before
you save them into the array $charList.
You can also parse inside the upCharGUI procedure. For example, you don't want to
present a cryptic entry in the selection list of the optionMenu, such as
AD_H_v1.8_250101. Instead, you program the display of more meaningful text, such as
HighRes Animation Doll version 1.8. In such a case, the text labels don't match the prefix
identification of our model naming convention. Second, AD_H_v1.8_250101 is displayed in
the selection list, because it's the name of the top node of your character or character object.
But in the naming convention of your model, only the prefix AD_H follows down the hierar-
chy tree. In this case, you need to parse again using the tokenize command.
Step two is to find out which animation mode is selected. In our example model, we
provide only the arms, with a choice for an IK or an FK animation mode. The more possibil-
ities your model offers—for example, a choice as well for the legs, or the possibility to switch
Mocap on or off—the more attributes you need to read out in the update procedure.
In our wooden doll scene (woodenDoll.mb on the CD-ROM), the switch attribute is
included with the rotation handles of the hands. The attribute itself shows the driven key val-
ues 0 and 1 in the Channel box, but you can change the driven key to a higher value, such as
10, and blend between IK and FK. The value 1 is for FK, and 0 is for IK. The idea behind the
query is that you can read the value of the ik_fk attributes with one look at the check boxes
in the GUI and change them at this location if needed. At the same time, we are going to update
the check box states every time an animator changes the values by using the Channel box.
But first we have to edit the check box behavior according to the selected character.
You can't edit the commands by turning the check boxes on or off in the main procedure,
because you need to know the prefix for identification. And with each change of the charac-
ter selection, the check boxes must be updated.
1. checkBoxGrp -e
2.    -on1 ("checkBoxGrp -e -v2 false toggleL;
3.    setAttr "+$prefix+"_hand_h_L.ik_fk_L 0")
4.    -on2 ("checkBoxGrp -e -v1 false toggleL;
5.    setAttr "+$prefix+"_hand_h_L.ik_fk_L 1")
6.    toggleL;
7.
8. checkBoxGrp -e
9.    -on1 ("checkBoxGrp -e -v2 false toggleR;
10.   setAttr "+$prefix+"_hand_h_R.ik_fk_R 0")
11.   -on2 ("checkBoxGrp -e -v1 false toggleR;
12.   setAttr "+$prefix+"_hand_h_R.ik_fk_R 1")
13.   toggleR;
Lines 1 and 6 specify which checkBoxGrp to modify. The next step is to give the check-
BoxGrp a radio button behavior and update the attribute value according to the selection.
The checkBoxGrp provides detailed specification for the execution of on and off state com-
mands for each checkBox inside the group. In the previous code, the on state command is
chosen. By setting value 2 to false, we turn off the second check box in the checkBoxGrp
when the first check box, which represents the IK value, has been turned on (-on1). At the
same time, the ik_fk attribute is set to 0, which means IK. With the second check box, we
proceed vice versa. In lines 8 to 13, we edit the right arm of the model in the same way.
If you prefer the look of radio buttons, this situation is ideal for them. You don't need
the extra commands to provide radio button behavior, and you save some lines of code.
After the editing, we want to read out the i k _ f k attribute values of the model and
update the check box accordingly.
int $ikval_L = `getAttr ($prefix + "_hand_h_L.ik_fk_L")`;
int $ikval_R = `getAttr ($prefix + "_hand_h_R.ik_fk_R")`;
The next step is to describe a conditional action depending on the test conditions of
saved attribute values, as shown in the following:
1. if ($ikval_L == 0)
2. {
3.    checkBoxGrp -e -v1 true toggleL;
4.    checkBoxGrp -e -v2 false toggleL;
5. }
6. else
7. {
8.    checkBoxGrp -e -v1 false toggleL;
9.    checkBoxGrp -e -v2 true toggleL;
10. }
ScriptJobs can slow down your system. If they aren't killed through a parent UI
object or by their job number, they can run forever.
With the flag -p/parent, we tell the scriptJob to terminate when the UI element
CharWindow is deleted. This ensures that no scriptJob continues when you close the Character
GUI. With the flag -ac/attributeChange, we tell the scriptJob to observe the value of the ik_fk
attributes of the selected character. If this condition comes true, the procedure upChecks is
executed, and the check boxes are updated to the new value.
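A sketch of what these two scriptJobs might look like inside upCharGUI follows; the attribute
names match the sample scene's naming convention, and upChecks is the update procedure
described here:

scriptJob -p CharWindow -ac ($prefix + "_hand_h_L.ik_fk_L") "upChecks()";
scriptJob -p CharWindow -ac ($prefix + "_hand_h_R.ik_fk_R") "upChecks()";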
Because the scriptJob is called in the upCharGUI procedure, every time you change to
another character, which calls upCharGUI, you create a scriptJob according to the selected
character. But the old scriptJob of the previously selected character isn't terminated and con-
sumes system capacity until the CharWindow is deleted. Imagine five or more characters in a
scene: as you switch between them while animating, you can easily produce 10 or more scriptJobs
running at the same time. But only the two scriptJobs of the currently selected character are
useful. All others just slow down or even crash your system.
Therefore, terminating all running scriptJobs before you create two new ones for each
new character assures a safe workflow. When the user selects a new character, the following
line kills all scriptJobs:
scriptJob -ka;
First, you declare the names of the IK and FK controls in a string array. Our example
model provides two specific controls for IK and FK at the arms. But in your production, you
might find other possibilities for legs, tails, and so on. Create an array for each part that can
change between IK and FK states. In our example, we decreased the number of arrays by 50
percent, because we have a good and strict naming convention. This allows us to forget
about declaring separate arrays for the left and right side.
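A sketch of such declarations follows; the IK control names match the ones used later in this
chapter, while the FK names are illustrative placeholders for whatever your naming convention
provides:

string $ikarmcontrols[] = {"ikPoleArm", "ikarm"};
string $fkarmcontrols[] = {"fkShoulder", "fkElbow"};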
int $ikval_L = `getAttr ($prefix + "_hand_h_L.ik_fk_L")`;
int $ikval_R = `getAttr ($prefix + "_hand_h_R.ik_fk_R")`;
Second, we get the attribute value of the i k _ f k attributes and save the integer value in
new variables. Another possibility is to query the state of the IK and FK check boxes, which
might look like the following:
int $state = `checkBoxGrp -q -v2 toggleL`;
If you look at the command description of the checkBoxGrp command, you see that
the queried flag -v2 (value2) is of type Boolean, which means true or false. But we save the
value as an integer. This works because the Boolean value true can also be described as inte-
ger 1, and false as 0. So if the check box is turned on, we get the response 1, and if it is
turned off, we get 0 as the result. You can't declare a variable of type Boolean, because MEL
treats a Boolean value as a constant of type integer, with the keywords true, on, and yes
for 1, and false, off, and no for 0.
The next step is to change the visibility according to the value of the attribute. If the
variable $ikval_L has the value 0, IK is selected. In this case, we want the IK controls of the
left arm to be shown and the FK controls of the left arm to be hidden. The following code
shows the necessary conditional actions and for loops to accomplish this.
1. if ($ikval_L == 0)
2. {
3.    for ($i = 0; $i < size($ikarmcontrols); $i++)
4.    {
5.       iconTextCheckBox -e -vis 1 ($ikarmcontrols[$i] + "_h_L");
6.       iconTextCheckBox -e -vis 0 ($fkarmcontrols[$i] + "_h_L");
7.    }
8. }
9. else
10. {
11.   for ($i = 0; $i < size($ikarmcontrols); $i++)
12.   {
13.      iconTextCheckBox -e -vis 0 ($ikarmcontrols[$i] + "_h_L");
14.      iconTextCheckBox -e -vis 1 ($fkarmcontrols[$i] + "_h_L");
15.   }
16. }
17. if ($ikval_R == 0)
18. {
19.   for ($i = 0; $i < size($ikarmcontrols); $i++)
20.   {
21.      iconTextCheckBox -e -vis 1 ($ikarmcontrols[$i] + "_h_R");
22.      iconTextCheckBox -e -vis 0 ($fkarmcontrols[$i] + "_h_R");
23.   }
24. }
25. else
26. {
27.   for ($i = 0; $i < size($ikarmcontrols); $i++)
28.   {
29.      iconTextCheckBox -e -vis 0 ($ikarmcontrols[$i] + "_h_R");
30.      iconTextCheckBox -e -vis 1 ($fkarmcontrols[$i] + "_h_R");
31.   }
32. }
In the first round through the loop, $i is 0 and the test 0 < 2 is true, so line 5 evaluates to:
iconTextCheckBox -e -vis 1 ($ikarmcontrols[0] + "_h_L");
which means:
iconTextCheckBox -e -vis 1 ("ikPoleArm" + "_h_L");
which again means:
iconTextCheckBox -e -vis 1 "ikPoleArm_h_L";
Now we increment the index $i, which means $i = $i + 1. Then we start to test the
condition again: 1 < 2 is true. The statement changes to:
iconTextCheckBox -e -vis 1 ($ikarmcontrols[1] + "_h_L");
which means:
iconTextCheckBox -e -vis 1 "ikarm_h_L";
We increment again and test 2 < 2, which returns false. Here the loop terminates, and
the script continues in line 7. However, no more statements have to be executed in the if con-
dition, so the next step is to run the else condition. Because the test condition for $ikval_L
has been 0, the else statements are executed if $ikval_L != 0 is true, which means every
value except 0. If your attribute defines the minimum as 0 and the maximum as 1 of type
integer, no mistake can be made. But if you have higher values for the maximum or have
defined the attribute as a float, you can blend between the two states of the attribute. If you
decide to use a float, script an else if condition to specify the exact value at which the else
statements should execute. For example, if you want to blend between 0 and 1, you might
want to make the switch at 0.5. In that case, the else if condition looks like the following:
else if ($variable >= 0.5)
The if condition looks like this:
if ($variable <= 0.5)
Lines 17 through 32 proceed in the same manner for the character's right arm.
Switching between IK and FK icons prevents the animator from making an incorrect selection, which
can occur when the animator selects the handles directly in the scene. To make sure this can't hap-
pen, modify the visibility of the animation control handles. One way to do so is to include this
modification in the script, right after the visibility state of the icons is changed. You can do this by
changing the visibility attribute of the handles or, if you combine the controls in layers, by switch-
ing these layers on or off. Use the setAttr command to achieve visibility changes.
setAttr "layer1.visibility" 0;
You can divide a string into pieces using the tokenize command. First, you divide by
the symbol ".".
tokenize $imagepath "." $nodot;
The resulting parts are F:/Sybex_Maya_Buch/picis/d4 and bmp.
The tokenize command returns the number of resulting parts as an integer, in this case 2. The
parts are saved into the array specified at the end of the tokenize command. The next step is
to divide the first part of the array by the symbol "/". The result is four pieces, of which only
the last is interesting to us.
int $noslashnum = `tokenize $nodot[0] "/" $noslash`;
Now you can edit the iconTextCheckBox image by adding the postfix _t. Last but not
least, select the selected item with the passed-on $controlname variable.
select -add ($prefix + "_" + $controlname);
To uncheck an icon, you use the same procedure, but tokenize one more step, by
checking the last array item by the symbol "_".
tokenize $noslash[$noslashnum - 1] "_" $buffer;
If the resulting array matches "t" at the second position, you have proof that the previ-
ous image was of state on. If the command is embedded in the getofc procedure that is
executed only when the iconTextCheckBox is turned off, you don't need this proof. To again
display the unpressed icon image, edit the image path of the iconTextCheckBox as follows:
iconTextCheckBox -e
   -i1 ("F:/Sybex_Maya_Buch/picis/" + $buffer[0] + ".bmp")
   -v off
   $controlname;
The flag -v/value is a special feature of the iconTextCheckBox. It's a visualization of
the pressed state, which makes the border look inlaid rather than raised, a fairly standard
way of indicating that an item is pressed in a GUI.
When you try out the scripts on the CD-ROM, remember that you must adjust
the paths that lead to the images. If you don't, you'll get error messages and won't
be able to see the images.
Then, again you distinguish the conditional actions according to the ik_fk values of the
hand attributes. Inside the for loops, we call the procedure getonc, which selects the items
from the arrays, as shown in the following:
int $ikval_L = `getAttr ($prefix + "_hand_h_L.ik_fk_L")`;
int $ikval_R = `getAttr ($prefix + "_hand_h_R.ik_fk_R")`;
if ($ikval_L == 0)
{
   for ($i = 0; $i < size($ikarmcontrols); $i++)
   {
      getonc($ikarmcontrols[$i] + "_h_L");
   }
}
else
{
   for ($i = 0; $i < size($fkarmcontrols); $i++)
   {
      getonc($fkarmcontrols[$i] + "_h_L");
   }
}
if ($ikval_R == 0)
{
   for ($i = 0; $i < size($ikarmcontrols); $i++)
   {
      getonc($ikarmcontrols[$i] + "_h_R");
   }
}
else
{
   for ($i = 0; $i < size($fkarmcontrols); $i++)
   {
      getonc($fkarmcontrols[$i] + "_h_R");
   }
}
The obvious choice is the select -cl command, but this command, which is also executed
when you click nothing in the scene, makes it impossible to deselect only one character while
another character is still selected.
For example, imagine two characters AD and WD. Both characters should make a par-
allel movement of the right arm. Let's say my first choice is the character AD. I decide to use
the IK controls of the arm and therefore select the AD_ikarm_h_R handle. I want to set keys
for both characters at the same time. In the optionMenu, I choose the character WD. The
GUI updates itself, but the previous selection is still active. Now I accidentally select the
wrong handle. If I click the Deselect All button, only the handles of the character WD are
deselected. In the case of the select -cl command, all handles are deselected, leading to
wasted time and frustration.
Maya provides a quick and easy way to achieve the default effect, so the Deselect All
button of a GUI should offer something more than that.
Ideally, the animator can select icons with the GUI, animate directly in the scene by selecting NURBS
handles, and then jump back to the use of the GUI icons without a loss of information about
the state of the character.
To achieve this permanent information flow, you create another scriptJob. This scriptJob
is very time- and capacity-consuming, because it runs whenever the selection changes. You
place it with the other scriptJobs inside the upCharGUI procedure.
scriptJob -p CharWindow -e "SelectionChanged" "selicon";
The executed command is a user-defined procedure. The idea is that the icons of the
controls selected by the animator are displayed in the pressed state in the GUI. In the proce-
dure, you first read out all selected DAG nodes of the active list.
$dagnodes = `selectedNodes -dagObjects`;
Be aware that character elements as well as every object on the active list are returned.
The command gives back full path strings of the items.
Next, you must check the size of the $dagnodes array. If the size returns 0, no elements
are in the active list. This occurs when the select -cl command has been executed. The
scriptJob reacts to all changes in the selection, not just additions.
If the active list is empty, all icons of the GUI should be in unpressed mode. Therefore,
the untickall procedure is called. You can't use the getofc or deselall procedure, even
though either procedure changes the icon states, because these procedures also include
select commands. If they are executed, the result can be an unterminated loop, and the sys-
tem crashes. To avoid that, you create a new procedure, called untickall, to change only the
icon states.
If the active list isn't empty, run some tokenize commands to separate the control
names from the hierarchy path. Divide first by "|" and then analyze the last item of the
array. Because of our naming convention, all animation control handles have the middle let-
ter _h_ for handle. With the second tokenize command, you search for this item.
for ($i = 0; $i < size($dagnodes); $i++)
{
   int $buffernum1 = `tokenize $dagnodes[$i] "|" $buffer1`;
   int $buffernum2 = `tokenize $buffer1[$buffernum1 - 1] "_" $buffer2`;
With another conditional action, you ensure that only the handles in the interface that
really belong to the currently selected character are updated in the GUI. In the naming con-
vention, we made sure that the h tag was at the second position from the right. Everything on
the left of this tag could be character information such as prefix, version type, and so on. All
this information should be part of the prefix, but might include the underscore. Because you
divide the string by this symbol, you have to put what is on the left of the letter h together
again and compare it with the prefix of the selected character.
string $pre = "";
for ($j = 0; $j < $buffernum2 - 3; $j++)
{
   $pre += $buffer2[$j];
   if ($j < $buffernum2 - 4) $pre += "_";
}
if ($pre != $prefix) untickall;
If the test condition is false, you prove that the item in the active list belongs to the
selected character. Next, you query the path of the icon image of the animation control if the
second test condition
else if ($buffer2[$buffernum2 - 2] == "h")
becomes true. To do that, you first put the control name together again:
string $controlname = ($buffer2[$buffernum2 - 3] + "_"
   + $buffer2[$buffernum2 - 2] + "_"
   + $buffer2[$buffernum2 - 1]);
The g e t h P a t h procedure mentioned earlier is slightly different from all the procedures
presented so far. The string S i m a g e P a t h shows that the procedure returns a value like a
queried command. If you want a procedure to return a string, float, or int, you must specify
this type at the start of the declaration:
g l o b a l proc s t r i n g g e t P a t h ( s t r i n g $ i m a g e P a t h )
The word string after the word proc indicates a return value of type string for this
procedure. Between the braces, proceed as usual. At the end, specify the value you want to
return by typing:
return $imagePath;
As soon as the script executes a return command, it jumps back to the line from which
the procedure was called and passes the value back to the calling procedure. Therefore, the
return command should always be at the end of the procedure, because any commands after
a return statement won't be executed.
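For example, a hypothetical call site might capture the returned string just as it would capture a queried command's result (the file name here is made up purely for illustration):
// hypothetical call: the result of getPath is stored like a queried value
string $iconFile = getPath("icons/leftArm_h_IK.xpm");
print ("icon path: " + $iconFile + "\n");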
Warning A warning is purple text that is displayed in the command feedback line.
When you provide a warning in your GUI, use the following syntax:
warning "text you want to be displayed";
Error Message An error message is red text that is displayed in the command feedback
line. When you provide an error message in your GUI, use the following syntax:
error "text you want to be displayed";
Alert An alert is a dialog box that contains text and/or a button selection. You use
alerts to force the user to confirm a certain action. A popular example of an alert is a
dialog box that asks if you really want to quit a program.
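One way to build such an alert in MEL is the confirmDialog command. The following is only a minimal sketch; the title, message, and button labels are arbitrary:
// minimal alert sketch: ask before a destructive action
string $answer = `confirmDialog -title "Reset Controls"
    -message "Really reset all animation handles?"
    -button "Yes" -button "No"
    -defaultButton "Yes" -cancelButton "No" -dismissString "No"`;
if ($answer == "Yes") print "Resetting controls...\n";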
Annotation An annotation is a little pop-up text field that explains the function of a
button, an icon, or some other control. The flag is available for nearly all control types
and has the following form:
-ann/-annotation "text that you want to be displayed"
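For instance, a hypothetical button in the GUI could carry an annotation like this (the handle name is an assumption based on the chapter's naming convention):
// hypothetical button with a pop-up annotation
button -label "L Arm" -annotation "Selects the left arm handle"
    -command "select leftArm_h_IK";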
In addition to the tool we created in this chapter, you might want to develop a control for
animating the fingers and a control for facial animation. For both face and hands, I recommend
implementing sliders. Using the attrFieldSliderGrp command creates an embedded update
between Channel Box and slider interface values with the flag -attribute. Here's an example.
// possible declaration inside the main procedure
attrFieldSliderGrp -min 0 -max 1 -s 1 -pre 0 -cat 1 "left" 1
    -cat 2 "left" 1 -cat 3 "left" 1 -cw 1 80
    -cw 2 50 -cw 3 10
    -cc "MoKeychange(\"leftArm_MOCAP\", \"MoCaparm1\")"
    -l "left arm" MoCaparm1;
You can help your animators speed up their animation by creating a database of facial
expressions and/or hand positions. Place them on a shelf with images of various states. Add a
special feature that saves additional expressions or positions that the animator creates. If an
animator creates poses they want to reuse, they can record the pose, and it is included in the
database. This database is open to all animators. With a database of character-specific expres-
sions or poses, you can ensure the consistency of animations for a single character. At the
same time, animation output per week will increase.
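A starting point for such a database could be a pair of procedures like the following sketch, which records the keyable values of the selected handles as a block of setAttr commands and replays it later. The shelf buttons, images, and file handling described above are left out, and all names are illustrative:
// record the keyable attribute values of the selected handles as one MEL string
global proc string recordPose()
{
    string $cmd = "";
    for ($node in `ls -selection`)
        for ($attr in `listAttr -keyable $node`)
        {
            float $val = `getAttr ($node + "." + $attr)`;
            $cmd += ("setAttr " + $node + "." + $attr + " " + $val + ";\n");
        }
    return $cmd;
}
// replay a recorded pose
global proc applyPose(string $pose)
{
    eval $pose;
}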
Summary
This chapter showed you how to plan, prepare, and program an animation GUI. I described
various approaches and showed you how to avoid certain problems. Using this overview,
you can now play with layout elements to develop GUIs that suit your own personal style. Be
aware that the solutions I described represent only one approach; you can achieve the same
results in several ways. Use your imagination to find your own.
Effective Natural Lighting
Frank Ritlop
Whether you work at a major studio, at a
two-person production company, or on your own personal project, you
likely have a set deadline for completing your assignment, and it's usually
tight. If photo-realism is the rendering goal, you need to work both effi-
ciently and intelligently to finish the project on time. You may have access
to a thousand-processor render farm, but chances are you need to share it
with a dozen others, so computer speed is usually also an issue. This chap-
ter covers methods for getting photo-realistic results while keeping in mind
the need for short production and render time. Specific steps interspersed
throughout the chapter will help clarify the theory along the way.
As in any field, it takes time and practice to acquire the lighting skills
to create photo-real virtual environments. It's not rocket science, but get-
ting great results requires a thorough knowledge of lighting basics; so that's
what we'll begin with. Even if you've been doing lighting for a while, going
over these basic principles will be a good refresher and will ensure that you
understand Maya's terminology.
Lighting Design
Before you roll up your sleeves and start the actual hands-on work of lighting, you must
establish what your scene will look like. This generally happens in a meeting with the client
or the director, either of whom may already have a mental picture of the scene. If you're
working on your own project, you'll likely have a clearer idea of what you want to achieve.
Illustrations provided by the client or director can save you days of treading down the wrong
path. If you have only verbal descriptions and hand gestures, you'll need to start interpret-
ing, and that can waste valuable time.
Lighting design is the process of creating the look of a shot that you can use as a guide-
line for other shots in a scene to achieve consistency. If you're working on a single image,
you'll still need to go through the process, working closely with the client or director to come
up with what they want. This usually involves establishing lighting for the set first. This task
can take anywhere from a few hours to a few days, depending on the complexity of the
scene. Keep your setup as simple as possible. The fewer lights you use, the more manageable
the shot, not to mention the faster it will be to render. Using fewer lights also lets you get
quicker feedback, which is essential in a fast-paced project.
As a starting point, you might want to use reference material such as photographs to
generate ideas. Photographs show how light bounces, diffuses, and reacts in different situa-
tions. If you've never touched lighting before, a trip to the theater can be useful. You can see
how the actual instruments are used to create light. Movies are also a great source of lighting
ideas. Simply studying still images from films to see where light is coming from or what col-
ors are being used can help you create a more interesting lighting environment for your
scene.
Lighting Passes
Using multiple lighting passes to render your scene gives you greater control over the final
look of your environment. For example, you can render a scene totally in one raytraced pass,
but if you want blurred reflections or a change in the depth of field, the only option is to
rerender the entire scene. On the other hand, if you have a separate reflection pass, a z-depth
pass, and so on, you can manipulate those layers and combine them to get the look you want
without having to rerender.
If you're working for a studio that has a compositing department, either you'll be told
what lighting passes they require, or you'll need to keep some good notes that describe what
each layer contains so that the compositor doesn't have to hunt you down to find out what
you've handed over. If you have characters in the set, you'll probably want to render them as
a separate layer for the following reasons:
• If there is a lighting mistake with your character or set, you don't need to rerender the
entire scene.
• You'll have an easier time lighting your character with lights that you extract from
your final set lighting without having to do elaborate light linking. (I'll discuss light
linking a little later in this chapter.)
• Memory issues involving heavy sets or characters might not permit you to load all ele-
ments at the same time.
• You can composite your character and set after all the elements are completed and use
post-processing effects to better marry your layers.
As a lighting artist, you need to understand how light looks, reacts, bounces, and reflects in the real world in order to do a convincing job. How
can you make it obvious to the viewer that they're looking at a scene that is being lit with
incandescent bulbs instead of fluorescent tubes or that a scene is taking place on a sunny
midday afternoon rather than a cloudy overcast afternoon? The ability to distinguish
between different qualities of light is a fundamental requirement of lighting artists in this
industry. Before going any further, let's look at some basic attributes of Maya lights.
In Maya, all light types share two common attributes: color and intensity. You can map
the color attribute with a texture, animate it, or leave it as a solid color. The intensity attrib-
ute accepts single value input to control the brightness of a light source. You can also use
negative values, which result in light subtraction, and can diminish light on specific surfaces
or create shadow effects without the added render expense of calculating actual shadows.
You can also map intensity with the alpha channel of a texture, or if the alpha channel
doesn't exist, you can designate the luminance as the alpha channel.
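As a small, hedged example, a light used purely for this kind of light subtraction might be set up like the following; the names are arbitrary:
// a spotlight created with a negative intensity for "light subtraction"
spotLight -name fakeShadowSpot -intensity -0.4;
// the same effect on an existing light, by pushing its intensity below zero
setAttr "someExistingLightShape.intensity" -0.3;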
If you don't have 256MB of RAM on your workstation, you might encounter slow-
downs as you work through some of the examples in this chapter. With memory
being so affordable these days, consider upgrading your machine to 512MB.
Figure 10.1: The Attribute Editor for ceilingLightShape
Figure 10.2: The Create Render Node dialog box
To delete a color control point at any time, LM click the small square box to the right of
the ramp. You should attempt to get a ramp that looks similar to the one in Figure 10.3.
11. To save this light to your current project's scenes folder, select your light, choose File →
Export Selection from the main menu, and name the file exercise1. We'll use this file
later in this chapter.
If you want to map the intensity attribute with a texture instead of with the color
attribute, adjust your material's Alpha Gain to control the intensity of your light.
Light Types
When using Maya, you can work with ambient, directional,
point, spot, and area light tools. In this section, I'll describe
these light types, explain the properties that are specific to each,
and suggest possible uses.
Ambient Light
Ambient light can best be likened to light that strikes an object
from every direction, as in the dim, directionless light that exists
just after sunset. In Maya, you use the Ambient Shade attribute
to control how light strikes the surface of an object. Adjusting
the value between 0 and 1 results in a corresponding shift from
directionless light to light emanating outward from a single
point. Figure 10.5 shows some Ambient Shade settings using
scanline rendering.
Figure 10.3: The Ramp attributes
Although ambient light behaves similarly to a point light when you set Ambient
Shade to 1, at render time, bump mapping is ignored, which results in flat surfaces
that may not properly represent your object's surface attributes.
Figure 10.4: Different decay rates (No Decay, Linear Decay Rate, Quadratic Decay Rate, and Cubic Decay Rate)
Unlike other light types, ambient lights do not emit specular light regardless of the
Ambient Shade setting. Also, it is not possible to create depth map shadows with ambient
light, but raytraced shadows are supported. (I'll discuss shadows in detail later in this
chapter.) If Use Ray Trace Shadows is turned on for an ambient light, the far side of an
object will not receive any light during raytrace rendering, leaving Ambient Shade to affect
only that area of a surface exposed to light rays.
To produce photo-realistic results, many lighting artists shun the use of ambient lights
in setups. Lightening shaded areas with other light sources produces better results than using
the Ambient Shade attribute. Ambient lights flatten the look of the final image, especially
when used to illuminate concave areas, such as the inside of a character's mouth or the nos-
trils where flat lighting would not be commonly found.
Directional Light
You can think of directional light as a light that is emanating from an infinitely great dis-
tance, resulting in parallel rays. Sunlight hitting the earth is as close as you can get to an
example of directional light. It doesn't matter whether you're standing in your backyard or
on the other side of town, by the time the sun's rays reach you, they are nearly parallel. Also,
sunlight's intensity does not change measurably from one part of town to another, resulting
in a negligible decay rate. Directional light in Maya does not decay, making it a suitable light
type for re-creating sunlight and moonlight.
Directional light is useful for lighting vast areas in a scene. On the other hand, you can
just as easily affect small areas of your scene by using light linking or objects, such as a wall
with a window frame, to mask the light. Since directional light does not have a starting point
from which light emanates, objects positioned in front of, behind, above, or below a direc-
tional light are lit identically with respect to intensity and direction.
Point Light
A point light emits omnidirectional light from a single point in space. Point lights are useful
for simulating light sources such as candles, incandescent light bulbs, or spill light emanating
from an isolated brightly lit surface, such as a theater stage that is lit with a bright narrow
spotlight.
Spotlight
Spotlights illuminate a cone-shaped volume with the source located at the tip. You
define the shape of a spotlight by giving it a cone angle in degrees. (Figure 10.6 compares a
spotlight with an intensity curve, on the left, to one without, on the right.) Penumbra angle and
dropoff are two additional properties that give you control over how the edge of the light
tapers off. A positive penumbra value fades the light intensity outside your original cone
angle in degrees, whereas a negative value tapers the edge inward by the value entered in
degrees. Dropoff defines how light is distributed within the cone angle. A value of 0 distrib-
utes the light evenly throughout the cone angle. Increasing the dropoff diminishes light inten-
sity toward the edge of the cone.
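The same three controls are available as creation flags if you prefer to script the light; here is a quick sketch with arbitrary values:
// a spotlight with a 60-degree cone, a 10-degree outward penumbra fade,
// and intensity that drops off toward the edge of the cone
spotLight -name windowSpot -coneAngle 60 -penumbra 10 -dropoff 8;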
With spotlights, you can easily create intensity curves from the Light Effects section of
the Attribute Editor. You can adjust intensity curves in the Graph Editor and control the
intensity of light at any given distance from your spot's origin. This is extremely useful for
gradually fading lights in and out to create a more photographic lighting situation while
avoiding clipping. A perfect example of clipping can be seen near the origin of the spot in the
image on the right in Figure 10.6. In the image on the left in Figure 10.6, intensity near the
origin tapers off more as would be seen by the naked eye.
The contrast ratio perceived by the human eye is far greater than any film stock or
video technology can capture, so you might think that there should be hot spots or blown-out
areas in the frame to more closely match what you'd see on film or video. I agree,
but keep such areas to a minimum. Most photographers and cinematographers
strive to reduce glare and burn using reflectors, scrims, and blockers. Clipping, or
overlighting, in computer graphics has a completely different look than that of film
or video and needs to be minimized to avoid calling attention to itself.
Area Light
With area lights, you can create diffuse light sources just as professional studio photogra-
phers do using soft boxes or bounce flash. They use these techniques to get soft, pleasing
shadows and highlights. Used effectively, area lights can be great for simulating light from
fluorescent tubes and panels, from bounce cards, or from window light. Area lights are rep-
resented as rectangular planes with a line segment protruding from the center to designate
their illuminating side. You can shape them using the Scale tool to increase or decrease the
apparent emission area, which increases or decreases the intensity of the light as
well. The combination of the Intensity attribute and the size of the plane dictates the intensity
of the area light.
Decay rate also plays a role in controlling how light is emitted. Figure 10.7 shows how
increased decay rates result in increasingly spherical light emission, a result of the falloff
beginning from the area light center and not at its edge.
Figure 10.7: Different decay rates with area lights (No Decay, Linear, Quadratic, and Cubic)
You'll get better results by setting the Decay Rate attribute to No Decay. By default,
area lights decay using a quadratic falloff from the edges of the light-emitting plane.
Light Linking
One feature that you'll find invaluable is the ability to selectively designate a light to shine on
specific objects within your scene while foregoing others. This is termed Light Linking and
plays a major role in creating photo-realistic lighting situations. Use the Relationship Edi-
tor's Light Centric Light Linking menu set to create the connections between lights and
objects, by selecting the light in the Light Sources list and then selecting the component(s)
that you want to link it to in the Illuminated Objects list. Or you can physically select your
light and objects within your scene, and then from the Rendering menu set, choose either
Lighting/Shading → Make Light Links or Lighting/Shading → Break Light Links to link or
break the connection between a light and an object. These two methods are the most com-
mon ways to create the necessary connections between lights and objects within scenes.
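If you prefer to script these connections, the lightlink command performs the same make and break operations; here is a minimal sketch with placeholder object names:
// link a light to one object, break an unwanted link, and check the result
lightlink -make -light lampArea -object couch;
lightlink -break -light lampArea -object floor;
lightlink -query -light lampArea;   // lists what lampArea currently illuminates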
7. Choose Window → Rendering Editors → Render View from the main menu to open
your Render View.
8. From your Render View menu, choose IPR → IPR Render → shotCam. When the ren-
der is complete, you might suspect that something went wrong since everything is
black except for the ceiling bulbs and the lampshade. Actually, your decay rate is set to
Quadratic, so you'll need to increase the light intensity before you see any detail. LM
click and drag out a rectangular box in your Render View just below the ceilingLight,
to force the IPR renderer to update this area as you make changes to your light.
9. Reselect your ceilingLight from the Outliner or Hypergraph.
10. Choose Window → Animation Editors → Graph Editor to open the Graph
Editor. Choose View → Frame All to fit the curve in the editing window. You should
have a curve resembling that in Figure 10.8. What you have is a graphical representa-
tion of the ceilingLight Intensity attribute; the vertical axis corresponds to the ceiling-
Light intensity, and the horizontal axis represents the sample distance from the light
source.
The height of the room is about 22 units, so any adjustments to keys past a
sample distance of 30 units won't affect your scene.
11. Select the first four keys and raise them until you start to see the brick texture of the
wall under the light.
12. Select the key at a sample distance of 0, and decrease it until you eliminate the clipping
from the wall. You may need to adjust the sample distance of the remaining keys until
you are satisfied with how the light starts to fall off.
13. Once you get something similar to Figure 10.9, choose Edit → Duplicate from the
main menu. Set the Number of Copies to 2, ensure that Geometry Type is set to Copy,
and ensure that the Group Under option is set to Parent. Click Duplicate to complete
the operation.
14. As you did in step 2, select both new lights and the ceiling, and then select Break Light
Links from the Lighting/Shading menu.
15. Via the Hypergraph, delete the constraints that have been copied under your lights,
and constrain your copied lights to the ceilingLamp02 and ceilingLamp03 group nodes
as you did with your original in step 4.
Figure 10.8: The intensity curve for ceilingLight in the Graph Editor
16. To control the color of the ceiling lights globally, we'll connect the ramp texture feed-
ing your ceilingLight to the other two copies. In the Outliner, choose Display →
Shapes, and select the ceilingLightShape node.
17. Open the Hypershade and click the Show Upstream Connections button. Your nodes
and connections should look like those in Figure 10.10.
18. MM drag and drop the ceilingLight1Shape node from the Outliner to the Hypershade
Work Area.
19. In the Work Area, MM drag the ceilingLightRamp node over the ceilingLight1Shape
node and release. Select a color from the pop-up menu. This operation connects the
ceilingLightRamp to your duplicated ceiling light. Figure 10.11 shows that now both
ceilingLightShape and ceilingLight1Shape share the same color texture.
20. Repeat steps 18 and 19 for ceilingLight2Shape.
21. You'll want to create new intensity curves for your duplicated lights since these connec-
tions were not copied when you duplicated your ceilingLight. Before you do, open
your ceilingLight Attribute Editor and click the Input box next to the Intensity slider
(alternatively, select the Intensity Curve tab in the Attribute Editor) to open the Intensi-
tyCurve Attribute Editor, as shown in Figure 10.12. This figure shows a tabulated ver-
sion of the keys you adjusted earlier through the Graph Editor. Click Copy Tab at the
bottom to create a floating window that you can use as a reference when you change
the values for the other two lights.
22. Create a new set of intensity curves for ceilingLight1 as in step 6, but in the lightInfo1
Attribute Editor, click the output connections button found next to the lightInfo input
box to access the IntensityCurve1 Attribute Editor.
23. Copy the Intensity Curve values from your floating window for ceilingLight to your
ceilingLight1 IntensityCurve1 Attribute Editor. Repeat steps 22 and 23 for ceiling-
Light2.
24. Let's add one other effect to the light before we render the scene. In the Light Effects
section of ceilingLight, click the map button next to Light Fog to open the lightFog
Attribute Editor.
Figure 10.9: The ceiling light with adjusted intensity curves
Figure 10.10: Hypershade connections for ceilingLightShape
25. Adjust the Color attribute to match the color of ceilingLight, and decrease the Density
value significantly (you'll see why once you start rendering).
26. IPR render from the shotCam perspective, and adjust the intensities and fog density
until you get something close to Figure 10.13 as your result. If you don't like the light
rings created by the ceilingLightRamp, access the ramp attribute through any of the
ceiling lights. By adjusting the one texture, you control the rings of all three lights.
27. Group the three lights by pressing Ctrl+G, and rename the group to ceilingLights.
28. Save your scene as exercise2.
You now have a starting point for your set lighting. You'll get to layer on other lights
from other sources as we move through this chapter. It's easier to do lighting in stages, plac-
ing lights that have a definite direction or come from a motivated source first. This makes it
easier to judge whether you're on the right track. Adding fill and bounce lights is a refining
process that comes later.
Shadows
To get realistic results from your light setups, you'll need to master the use of shadows. You
can have the best textures, the right colors, and the perfect intensities and decay rates for your
lights, but if proper consideration is not given to shadowing the scene, it will lack realism.
Knowing when to and when not to use shadows, and what kinds of shadows work best for the
job, will save you precious rendering time. In Maya, you can render using either depth map or
raytrace shadows. Although true transparency and refraction are possible only through raytrac-
ing, depth map shadows are still the fastest and most economical way to light your scenes.
Many CG film production companies still use depth map shadows almost exclu-
sively to obtain high-end images. Considering that a single 2K frame can take hours
to render even in scanline, raytracing, which is often many times slower, is usually
considered only as a last resort when acceptable results can't be obtained otherwise.
You can toggle depth map shadows in the Attribute Editor of a light's shape node
under Depth Map Shadow Attributes. Dmap Resolution represents the width and height of
your depth map in pixels to give you a square depth
map. The larger the resolution, the more refined the
depth map, but the longer it will take to generate this
map. The combination of maximizing the usable area
of a depth map and keeping the resolution as low as
possible results in faster rendering times with high-
quality results. As a starting point, use a resolution at
which objects in your depth map will appear roughly
the same size as the objects in your final rendered
image, as illustrated in Figure 10.14.
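The same attributes can be set from MEL, which is convenient when you are tuning several lights at once; this sketch uses the exercise's ceiling light as an example, and the values are only a starting point:
// enable and tune depth map shadows on one light shape
setAttr "ceilingLightShape.useDepthMapShadows" 1;
setAttr "ceilingLightShape.dmapResolution" 512;
setAttr "ceilingLightShape.dmapFilterSize" 2;
setAttr "ceilingLightShape.dmapBias" 0.001;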
The Use Mid Dist Dmap toggle gives you the choice of working with either min or mid
distance depth maps. Understanding how both types of depth map shadows work will help
you get better results in your lighting.
Figure 10.14: When you want crisp shadows, an object should occupy an equal or greater
area in the depth map than that of the final rendered image.
With min distance depth maps, Dmap Filter Size also influences your Dmap Bias set-
ting. As you increase the Dmap Filter Size, you increase the softness of the shadow in all
three dimensions. This added softness spills over to the surface that is casting the shadow
and begins occluding it once again. If you are using the default Dmap Bias together with a
high Dmap Filter Size applied to the shadow, pixel jitter can occur on the surface. Some
tweaking is needed to find the right Dmap Bias value for a particular light.
If you test render a scene at a lower resolution, make it a point to test a few frames at
final resolution just to rule out pixel jitter. Increasing resolution for final render may
require increasing your depth map resolution as well. Remember that this will require
an increase in Dmap Filter Size to get the same results with a larger depth map.
Figure 10.18: Moonlight color
Figure 10.19: Moonlight through the blinds
Dissipating Shadows
The phenomenon of light spilling into an occluded area causing shadows to grow softer as
they fall away from an object is referred to as shadow dissipation. The larger the emitting
light source, the faster the shadow dissipates. In Maya, you can create these shadows using
both scanline and raytrace rendering methods. You will find that depth maps are more work
to set up; however, raytraced shadows take much longer to render. In raytrace rendering, dis-
sipating shadows are supported by all light types, including ambient lights. Simply adjust the
Light Radius value under Raytrace Shadow Attributes to control the size of your emitting
source. Just to give you an idea of how much time you can save by rendering in scanline, the
raytrace image on the left, in Figure 10.20, took more than 8 times longer to render than the
image on the right using a depth map.
If rendering time is not an issue and your project involves providing several still
images, you'll probably get better results rendering in raytrace. Dissipating shadows will
occur naturally, depending on the size you specify for your source, the proximity of one
object to another, and so on. Dissipating shadows in scanline, on the other hand, can be
effective in some cases but fall far short in others. You can decide for yourself when to use
them in your scenes once you get a firm grasp of how they work.
Creating dissipating shadows in scanline is similar to increasing the Dmap Filter Size of
a shadow behind an object. In Maya, you create an anim curve, with the Dmap Filter Size
paired against distance from the light source. We'll look at how to do this later in this chapter.
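The steps later in this chapter build that curve through the Proportional Modification Tool and the Connection Editor; the same network can also be sketched directly with driven keys. Treat the node names and key values below as assumptions rather than the chapter's exact procedure:
// assuming a lightInfo node (here called lampInfo) already samples distance for the light,
// drive the light's Dmap Filter Size from that distance
setDrivenKeyframe -currentDriver lampInfo.sampleDistance
    -driverValue 0 -value 1 lampAreaShape.dmapFilterSize;    // sharp shadow near the lamp
setDrivenKeyframe -currentDriver lampInfo.sampleDistance
    -driverValue 20 -value 8 lampAreaShape.dmapFilterSize;   // much softer 20 units away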
Raytrace Shadows
You can achieve more realism from your renders by using raytrace shadows; the tradeoff,
though, is longer render times. In addition, if you render one light using raytrace shadows,
you can't combine depth map shadows on the same render pass. Raytraced shadows can be
composited together with a scanline render using image-processing software. Figure 10.21
shows how a raytrace shadow pass composited with a scanline rendered image can result in
Figure 10.21: A raytrace shadow pass (on the left), composited with a scanline rendered image (in
the center), can give you high-quality shadows as a final result, as in the image on the right.
more realistic shadows. To obtain a shadow pass, assign a Use Background material with no
reflectivity, and set the Shadow Mask to 1 for the objects in your scene. The resulting image
contains the shadow information in the alpha channel, as shown in the image on the left in
Figure 10.21.
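If you want to script that setup, a shadow-catcher material can be built in a few lines; the geometry name is a placeholder:
// create a Use Background shader that only catches shadows
string $ubg = `shadingNode -asShader useBackground -name shadowCatcher`;
setAttr ($ubg + ".reflectivity") 0;
setAttr ($ubg + ".shadowMask") 1;
// assign it to the set geometry before rendering the shadow pass
select -replace setGeometry;
hyperShade -assign $ubg;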
Figure 10.23: Hypershade connections for lampArea
Figure 10.24: Making a connection in the Connection Editor
10. From the main menu, choose Modify → Transformation Tools → Proportional Modifi-
cation Tool. In the Tool Settings, select Curve as the Modification Falloff, and select
Create New from the drop-down menu to the right of the Anim Curve box.
11. In the Outliner, deselect DAG Objects Only from the Display menu, and rename your
propModAnimCurve to lampAreaDmapFilterSize. MM drag and drop this node into
the Hypershade.
We'll need to connect the lightInfo node's sample distance to the lampAreaDmapFilter-
Size node's input so that at render time, Maya will know where the light is positioned in world
space and apply the filter size you designate at the sample distance from the light's origin.
12. MM drag and drop the lightInfo node over the lampAreaDmapFilterSize node.
13. In the Connection Editor, select sampleDistance from the lightInfo node Outputs col-
umn, and select input from the lampAreaDmapFilterSize Inputs column to make the
connection, as shown in Figure 10.24.
14. MM drag and drop the lampAreaDmapFilterSize node over the lampAreaShape node,
and select Other from the pop-up menu to gain access to the Connection Editor once
again.
15. Connect the output from lampAreaDmapFilterSize to the DmapFilterSize attribute of
your lampAreaShape. Figure 10.25 shows what your final connections should look like
in the Hypershade.
16. IPR render the scene, and adjust the Intensity Curve and the Dmap Filter Size of your
light until there is no visible clipping on the wall or couch closest to the lamp. Even
though we're using mid distance depth maps, you'll need to increase the Dmap Bias set-
ting to compensate for the increased blur at the far end of the room. Also, make sure that
some light reaches the back wall. You should finish with something like Figure 10.26.
17. Duplicate lampArea, and in the Attribute Editor, switch the type to Spot Light and
assign a new intensity curve to it.
Figure 10.25: Final connections for lampArea Figure 10.26: The IPR render of lampArea
18. Change the type back to Area Light once again. You may have noticed that Maya
restored the name of your lampArea1Shape to areaLightShape1 in the Attribute Editor
upon switching types. Rename it back to lampArea1Shape to avoid possible confusion
later.
19. Turn off Illuminates by Default, and Ctrl+select the set's wallN group node from the
Outliner. Under the Rendering menu set, choose Lighting/Shading → Make Light Links
so that lampArea1 affects only the wall.
20. Rotate and position the lampArea1 until it resembles Figure 10.27.
21. IPR render shotCam and adjust the intensity curves to get even lighting behind the
lamp. When you've gotten results that are similar to Figure 10.28, continue with the
steps in the next section.
Turning off Illuminates by Default turns the light off in the scene, and then light
linking it to specific objects in the scene results in only those objects being lit.
6. Set your lampBot Cone Angle to 90, and set the Penumbra to -15.
7. Make sure lampBot is selected, and then in one of your working views, choose Panels →
Look Through Selected. Move in or out until your cone angle roughly fills the lamp
opening.
Figure 10.28: Two area lights simulate the soft light from
the floor lamp.
4. You'll need to dissipate the shadow somewhat as it stretches up to the ceiling. Using
steps 9 through 15 from the "Lighting the Couch, Wall, and Coffee Table" section as
your guide, create the setup for dissipating shadows.
5. IPR render your scene and activate a region above the lamp to adjust.
6. In the Graph Editor, adjust both the intensity curve and the Dmap Filter Size curve
until you have something that looks similar to Figure 10.31.
Creating Soft Light for the Walls and Floor from the Lampshade
Earlier we simulated light from the lampshade hitting the walls and couch. Now we need to
simulate light from the lampshade on the floor and ceiling. This process will brighten the entire
scene and make the light emanating from the lamp more convincing. Follow these steps:
1. Create a spotlight with an intensity curve.
2. Change the type to Point Light, and position it at the center of the lamp, making sure it
is placed at the same height on Y as the two area lights.
3. In the Outliner, rename the light to lampshadeFill.
4. Change the color of the light to a pale yellow like that of the two area lights, and
ensure that you have Use Depth Map Shadows turned on.
5. Break the light link between lampshadeFill and the lampshade, lampStem, and both
LampSupRod groups to ensure that no lamp objects generate unnatural shadows or
occlude the very light we're trying to set up.
6. IPR render out the scene and adjust the intensity curve until you get something similar
to Figure 10.32.
7. Add a DmapFilterSize curve to your light to create dissipating shadows.
8. In the Graph Editor, set up your curve to start from a DmapFilterSize of 1 at a sam-
pleDistance of 5, and increase both from there. If you want precise measurement
between points in your scene, use the Distance Tool under Create → Measure Tools
from the main menu and then simply follow the Help Line tips.
Figure 10.31: The final IPR look of lampTop
Figure 10.32: The dim light of lampshadeFill should add a bit more detail to the room while
adding much-needed light to the floor and ceiling.
A good way to select bounce color is to render out, or load, an image in the Render
View from the latest setup, and then select your color from the area your light should
emanate from, using the Eyedropper tool in the Color Chooser.
5. Ensure that you're using a quadratic falloff as your decay rate. Turn on Use Depth
Map Shadows, and de-select Use X+ Dmap, Use Y- Dmap, and Use Z+ Dmap since we
only need a shadow to cast from the beam against the back wall. We'll tweak this
light's intensity curve once we place a few more lights in the scene.
6. To diminish the contrast on the far wall created by the shadow of the couch, add a
point light near the far arm of the couch.
7. Name this light bounceFromCouchEastWall.
8. Use a desaturated beige, paler than that used to define the bounceFromFloorLamp light
color, set Decay Rate to Linear, and turn off Emit Specular once again.
9. Light link it exclusively to the wallEast and blinds group nodes inside set.
10. Turn off Use Depth Map Shadows for this light and keep the intensity very low (an
intensity of 1 should suffice because this light serves only as a subtle fill).
11. Now we need to add a bounce light for the ceiling. Create an area light and call it
bounceWallNorthOnCeiling.
12. Assign it a desaturated yellow color, turn off Emit Specular, and light link it to only the
ceiling.
13. Position bounceWallNorthOnCeiling halfway between the couch and the ceiling
against the north wall. Center it below the middle ceiling light, and scale on X so that
its ends stretch past the first and last ceiling lights. Remember to rotate the light on Y
so that it is facing the room.
14. We'll create a light to add a hint of detail to the far side of the pillar. Create an area
light, name it bouncePillarFarSide, and scale it on all axes by 4.
15. To position bouncePillarFarSide, choose Look Through Selected in your working view
Panels menu, and then orient the view to see the dark side of the pillar and a majority
of the room.
16. Light link bouncePillarFarSide to only the wallNorth group and ceiling, and disable
Emit Specular in the Attribute Editor. You won't need shadows for this light.
17. The last light you'll need to add for this part will be used to fill in pure black areas on
the floor. Create a spotlight called fillFloor with a linear decay rate, and position it
near the wallW geometry. Turn off Emit Specular, and light link the spotlight exclu-
sively to the floor.
18. While looking through your light, you should have the floor below the pillar centered
in your light cone. Set the Penumbra Angle to the negative value of your cone angle to
make the light fade out gradually toward the edge. Your resulting light positions
should resemble those in Figure 10.35.
19. IPR render the scene with your lights, and adjust the intensities and intensity curves
until your results look like Figure 10.36.
If you find that any of your lights have become too intense, "blowing out" the
scene, be sure to go back and adjust them accordingly.
Figure 10.37: The final placement of bounce lights on the couch within the scene
Figure 10.39: All lights including the tvFill complete the set lighting.
3. Give the tvFill a pale blue color, select Use Depth Map Shadows, and set Dmap Filter
Size to 8.
4. IPR render your scene. Adjust the intensity until you get something similar to Figure
10.39. If you want, you can use a noise function to vary the intensity of the tvFill light
to simulate the varying intensity of a television set.
5. Group this light under a group node named tvLight.
6. Select all the light groups you've created thus far and group them. Call this group
lights, and save your scene file as exercise5.
If you followed the steps closely, you now have a complete light rig. If you've never
paid much heed to using descriptive naming standards or structure in your work,
you'll find yourself lost amid the many lights that constitute a setup similar to this.
By grouping lights in an organized fashion, and not throwing them throughout your
scene, you can easily navigate and understand what each light's role is in the lighting
of your set. This is important when working with others who may need to work with
your light setup or if you have a problem that needs troubleshooting.
Light Rigs
Because the process of creating a good lighting setup can be time consuming, a structured
workflow is crucial. Essentially, a light rig is a set of lights parented under either a group or a
locator top node and positioned for a specific element in a scene or for a particular layer.
This node can be constrained to a target element so that all lights follow its movement, as
you'll see in the next section. This type of setup allows for unlimited movement of the sub-
ject in the scene without concern for shadow map imprecision. Using one large shadow map
that encompasses a large area of a scene limits how close you can get to the subject before
experiencing pixel jitter or other nasty rendering artifacts. Also, a constrained light rig can
allow for lower shadow map resolutions for subjects that occupy a smaller area on the
screen.
Once you tweak and position a lighting rig for a particular subject in a scene, you can
duplicate it and constrain it to other elements within the same shot or to elements in other
shots that have similar ambient lighting conditions.
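In MEL terms, reusing a rig can be as simple as the following sketch; the rig and target names are assumptions:
// duplicate an existing rig and constrain the copy to a second element
string $newRig[] = `duplicate -returnRootsOnly puppetLightRig`;
pointConstraint secondCharacter_root $newRig[0];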
We'll use the shadows generated by lampshadeFill and plug them into lampshade-
FillPuppet, because they are identical light types and are positioned in exactly the same
location. Moreover, lampshadeFill generates shadows containing depth information for the
puppet and the rest of the set, making it possible for shadows from the couch to fall on
the puppet as she walks past it.
8. Ensure that lampshadeFill has Reuse Existing Dmap(s) set in the Disk Based Maps
drop-down menu in the Attribute Editor and that both lampshadeFill and lampshade-
FillPuppet have Dmap Resolution set to 512.
9. Deselect Dmap Scene Name and Dmap Light Name, and toggle on Dmap Frame Ext
for both the lampshadeFill and lampshadeFillPuppet.
You might want to consider deselecting Dmap Frame Extension when performing
IPR renders. The IPR render seems to have issues with depth maps using frame
extensions and does not employ them when fine-tuning a region.
Be sure to display shapes (choose Display → Shapes) in the Outliner, or you may
not be able to load the shape attributes for the lampshadeFillPuppet or lampshade-
Fill.
12. Let's add a bounce light that will travel with our puppet as she walks through the set.
Create a spotlight called bouncePuppet with No Decay and Emit Specular de-selected.
Give the spotlight a color similar to the brightest part of the floor.
13. Select Use Depth Map Shadows with a Dmap Filter Size of about 6.
14. Light link the spotlight exclusively to the puppet, and position it below and behind the
puppet as shown in Figure 10.40.
15. Since our character moves through the set, the light position will be correct for only a
couple of frames out of the whole shot. To correct this, create an empty group and
point constrain it to the waist joint node under the puppetSkel group inside puppet.
This group will follow the puppet for the duration of the shot.
16. MM drag bouncePuppet into this new group in the Hypergraph or Outliner, and
presto—the light now follows the character for the length of the shot.
You might want to throw out any depth maps that were generated earlier in this
chapter so that they are regenerated now that the puppet character has been added.
17. IPR render the scene and adjust the light intensities until your results resemble Fig-
ure 10.41.
Finishing Touches
You may not have reflections like the image in Figure 10.42, but with a little more work,
you can get similar results.
Figure 10.41: The final lighting of the puppet character and set
Figure 10.42: The final lighting of the set with simulated reflections on the floor and in the window
Getting reflections in scanline rendering sometimes requires reflection maps. However, when you have a large
area that needs a reflection, such as a floor, inverting your set on Y, repositioning lights, and
adding a little fog can get you better results than raytracing your scene. If you don't have
compositing software, a well-selected transparency setting will give you control over how
much of a camera image plane you allow to show through a surface.
For the reflection in the floor, I copied my final scene file and inverted the set node on
Y. Some lights needed rotation on X, and others, such as the area lights, needed to be reposi-
tioned completely. I used fog on height to obscure the reflected image on Y to avoid a perfect
reflection typically associated with raytracing. Once the image was rendered, I mapped the
image to an image plane in the final scene and added some transparency to the floor to allow
the image to bleed through. I used the same procedure for the reflection in the glass at the far
end of the room.
You can get the same results in many ways. This method is simply one that works well
in most cases. What's important is to know when to stop and when to go that extra yard. In
this particular case, it made sense to add reflections in the floor and in the window. You can
make other tweaks and adjustments to improve the look of this and any lighting scene, but
as we all know, time is of the essence in any production.
Summary
The focus of this chapter was to help you develop a better understanding of lighting basics
while improving the quality of your photo-realistic lighting work through practical exam-
ples. I discussed how to simulate light from motivated and bounce sources using Maya's
variety of light types, emphasized the importance shadows play in creating natural lighting
environments, and covered different methods of light linking. The techniques employed in
this chapter provide some alternate approaches to using Maya's tools and should help
improve the quality of your work.
Distributed Rendering
Timothy A. Davis
Jacob Richards
Today's desktop machines put incredible power at their users' fingertips, yet much of that power sits unused. Consequently, billions of CPU cycles are wasted each
day. Fortunately, rendering is incredibly CPU-intensive and can take advantage of those
wasted cycles, often bringing the most powerful systems to their knees.
Although Maya comes bundled with distributed-rendering software for Unix-based
machines, it does not include any such software for Windows machines. Considering the
power of such machines, a Windows-based distributed renderer could prove highly useful.
For this reason, in this chapter we'll focus on developing distributed-rendering tools for PCs
running Windows.
We'll begin by discussing the distributed renderer that is packaged with the standard
Maya release and some of its potential pitfalls. Next, we'll make some suggestions about how
to get the most out of your network for Maya rendering, and then we'll turn to the details of
our network renderer (called Socks) for Windows machines. During this discussion, we'll
describe how to use this system and what's going on under the hood. In the end, we hope you'll
have a better understanding of distributed rendering and how you can make it work for you.
Figure 11.1: In image-level subdivision, a single image is divided into smaller regions that can be ren-
dered by different machines.
Figure 11.2: In frame-level subdivision, a multiframe animation is divided into frames (or sequences
of frames) that can be rendered by different machines.
On a frame level, each machine renders a single frame, or a subset of frames, in an animation
sequence, as shown in Figure 11.2.
Since most large rendering tasks involve animations, we'll focus on rendering at the
frame level. So, let's assume you have now started a render for a single frame on each
machine in the network. As frames of the animation complete, you need to collect them and
possibly move them to a single machine (if the machines do not share a file system). You also
need to watch the render tasks so that as soon as one finishes on a machine, you can start
another render on that machine.
As you can imagine, this process is tedious and time-consuming. And, hey, you have
more important things to do (such as fix the mistakes you found in the frames that have ren-
dered so far). Fortunately, Maya offers you some help on Unix systems with its bundled dis-
tributed renderer called Dispatcher. Dispatcher can run on any machine within the network
to control distributed rendering from a single centralized location.
average load per computer. With these features, you can start a large render, leave it running,
and come back later to see the results. Dispatcher takes care of most of the pesky details.
Dispatcher also attempts to save some load time by sending each computer what it
calls a packet of frames. A packet is really nothing more than a request for the computer to
render a certain number of consecutive frames. Thus, the computer has the geometry, tex-
tures, and any other necessary information already loaded into RAM from the previous
frame and does not have to access the disk or network to get this data to render the current
frame. This makes sense when you consider, for example, that it takes about 15 minutes to
load a 15MB scene file and 500MB of textures over a 10Mbit line for each frame you want
to render. Even at 100Mbit speed, it takes about 2 minutes to load the scene. Therefore, you
waste 1 hour of render time for every 1 second of animation produced at 30fps.
Dispatcher is also nicely integrated in Maya's interface. In the Render drop-down
menu in the main Maya window is an option called (Save) Distributed Render. When you
select this option, Maya displays the Save/Render window (see Figure 11.3) that asks for a
filename for your current scene.
When you select a filename, Maya displays the Submit Job window (see Figure 11.4),
in which you will find a number of useful options for rendering your animation. You can edit
the distributed render command line, set the start and end frames for rendering, select a ren-
dering pool of machines, and give your job a priority of 0-100 (with 100 being the most
important and 1 being the least; 0 is used for suspending the job).
Maya also opens another window titled The_Dispatcher_Interface (see Figure 11.5) in
the background that displays your jobs and the machines they are running on once the render
begins. The first section of this window shows each host, the job running on it, and the current
frame number it is rendering. The middle section shows similar information, but based on job
ordering rather than on host ordering. The bottom section lists jobs waiting to be rendered.
The_Dispatcher_Interface window also contains several drop-down menus: File,
Hosts, Jobs, and Pools. We'll discuss some features of the options on these menus below,
but for a complete description, see the online documentation.
Choose Hosts → Configure to open the Host
Configuration window, as shown in Figure 11.6. In
this window, you can configure the client machines on
which you want to render. The Host list box contains
all the machines that the Dispatcher has access to and
allows you to choose the machine you want to config-
ure. Clicking the Enabled button lets Dispatcher use
this machine in the distributed render; otherwise, Dis-
patcher is denied access to this machine.
You use the options in the middle section of the Host Configuration window to restrict use of the machine.
Figure 11.3: The Save/Render window allows you to specify an output file.
Figure 11.4: The Submit Job window provides a means for launching a distributed render.
Setting the Maximum Jobs number higher than the number of processors on the
machine can cause serious processing delays. That is, doing so will start N renders,
which could slow down your machine exponentially as context switches between the
processes eat up CPU cycles, even if those processes are set to low priorities with nice.
Dropped Frames
Unfortunately, if one of the computers in your pool drops a frame, Dispatcher may have no
idea that the frame is not complete. It just knows that Maya is not running on that computer
and sends another frame for it to render. Obviously, this is not good since the machine is
apparently experiencing a problem with the file or is experiencing some system error.
Dropped frames can be especially annoying since fcheck and Adobe's After Effects
may choke when trying to display an incomplete sequence of consecutively numbered
images. A smarter Dispatcher would detect that the render did not complete successfully and
send the frame to be rendered again. Whether it should send the frame to the same computer
or a different (hopefully working) machine is up to the programmer.
If a machine is constantly dropping frames, you might want to remove it from the ren-
der farm pool until you can figure out what's wrong with it. It could be that the computer
can't access a file or doesn't have enough RAM to hold the current file. Or maybe another
process running on the machine is using 99 percent of the processor. Who knows? The point
is that many situations can prevent a machine from completing a render.
One nice feature of Dispatcher is that when it does detect a dropped frame, it sends
you an e-mail message telling you which frame was dropped by what machine and when. In
some instances, though, the machine locks during rendering, and Dispatcher does not detect
the lock-up. Instead, Dispatcher appears to think that the machine is rendering a particular
frame for hours upon hours, so the frame never gets reassigned to another machine. The
result is a waste of a lot of machine hours in addition to the dropped frame.
Incomplete Frames
Occasionally, the Maya renderer crashes in the middle of a frame render, leaving the image
output file only partially complete. Currently, Dispatcher does not check for such situations;
therefore, no corrective action is taken. It may well be that the machine that just experienced
the render crash will be given a new task as if nothing had happened.
This situation is not usually detected until you view the frames with fcheck or some
other animation viewer. Although you might notice a dropped frame in a directory listing, an
incomplete frame can be harder to detect unless you look at a detailed listing that shows all
the file sizes. Sometimes, nothing is written to the file, so its size is 0, which is relatively easy
to spot in a listing. At other times, though, the frame is closer to complete, so its file size
doesn't seem out of line with the others. This is especially true with .iff files, the sizes of
which are not constant across frames.
The file_name is the prefix name of the frame filenames, which must be in the format
<frame>.<frame_number> (similar to fcheck). Figure 11.8 shows example output of the
program. The program checks for three types of problem frames:
• Missing frames
• Empty frames
• Suspicious frames
Missing frames are detected by a skipped frame number in the list of files (for example,
bounce2.iff.9, bounce2.iff.10, bounce2.iff.12). You can easily identify empty frames
since they have a file size of 0. Suspicious frames include any frame that is considerably
different in file size than the mean file size of the other frames in the directory. You might
have to view suspicious files indi-
vidually to determine that they are
complete.
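The Frame Check tool does more than this, but even a short MEL script can catch the most common problem, skipped frame numbers; here is a rough sketch with a hypothetical folder and file prefix:
// list rendered frames and warn about gaps in the numbering
string $files[] = `getFileList -folder "S:/renders/" -filespec "bounce2.iff.*"`;
int $nums[];
for ($f in $files)
{
    string $parts[];
    int $count = `tokenize $f "." $parts`;
    $nums[size($nums)] = int($parts[$count - 1]);
}
$nums = sort($nums);
for ($i = 1; $i < size($nums); $i++)
    if ($nums[$i] > $nums[$i - 1] + 1)
        warning ("missing frame(s) between " + $nums[$i - 1] + " and " + $nums[$i]);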
Also notice in Figure 11.8
that some additional information is
provided:
• Elapsed time
• Mean time
• Est. time remaining
Elapsed time is the amount of time
that has passed between the oldest
and most recently rendered frames
and is computed from the time
stamps on the files. The Mean time
is simply an average rendering time
based on the Elapsed time divided
by the number of frames. The Est.
time remaining indicates how
much time is left to render, based
on the current frames.
Figure 11.8: The Frame Check tool provides statistics on rendered frames.
Recommendations
Because Maya rendering is such a resource-
intensive task, you need to follow certain
guidelines when assigning machines to par-
ticipate in the distributed render. These
guidelines are a direct result of our experi-
ences, and we hope they will help you avoid
some of the unhappy situations (and users!)
that we've encountered.
One of the items shown in the Load
Scan display is a field that indicates whether
a Maya session (interactive or render) is cur-
rently active on that machine. You'll gener-
ally want to avoid starting any render job on
a machine running an interactive Maya ses-
sion. Our 1GHz Linux boxes with 1GB of
RAM are barely able to handle Maya inter-
active sessions on some of our larger scene
files with nothing else running on the
machine, much less a Maya rendering
process. So, recommendation number 1 is:
don't run Maya renders on the same
machine running a Maya (or some other)
interactive session.
Figure 11.9: The Load Scan tool helps you identify machines to use for distributed rendering.
These guidelines do not necessarily hold for machines with multiple CPUs,
Figure 11.10: This network of machines has been divided into three mutually exclusive partitions.
Installing Socks
Installing Socks is a simple procedure. The only requirement is that you have at least one
TCP/IP connection on a machine running Windows NT/2000/XP. As a general rule, any
machine that can run Maya can run Socks.
To install Socks, follow these steps:
1. Copy Socks.zip (about 80KB) from the CD-ROM included with this book to a direc-
tory of your choice.
2. Extract the files.
3. Run the appropriate executable: Socks.exe (the master process) or Client.exe (the
client).
Neither executable is designed to run on any version of Windows other than Windows
NT/2000/XP; therefore, run Socks under Windows 95/98/Me at your own risk.
Figure 11.11: Socks is configured with one machine running the master process and many other
machines running the client process.
if (Add_computer_button.Clicked) {
    Add_computer_dialog (New_computer);
    TCP.connect (New_computer.ip);
    Add_computer_to_list (New_computer);
}
if (Delete_computer_button.Clicked) {
    TCP.disconnect (Selected_Computer);
    Delete_Computer_from_list (Selected_Computer);
}
if (Render_button.Clicked) {
    Build_Option_String (Render_Parameters);
    TCP.Send (Selected_computers, "RENDER" + Render_Parameters);
    Add_Render_to_list (New Render (Render_Parameters, Selected_computers));
}
if (TCP.Message_Received) {
    switch (TCP.Message) {
        case "STTUS1": Update_Status (RENDERING);
        case "STTUS2": Update_Status (RENDER_SUCCESS);
        case "STTUS3": Update_Status (RENDER_FAILED);
        case "CHATS":  print (TCP.Chats_message);
    }
    Update_Statistics ();
}
until (Quit_program)
Figure 11.12 shows an overview of what the master process does after the Render but-
ton is clicked. We'll provide additional details on these steps later.
System Requirements
As mentioned earlier, the master process runs on a single computer. This computer doesn't
have to be anything high-powered; the only requirement is that it must have TCP/IP installed
and be able to communicate with other machines in the rendering pool on port 61276.
If you are running firewalls to protect machines from outside attacks, you will need
to grant access to port 61276 in order for this program to work.
In addition, the scene file to render must be on a network drive available to all
machines in the render pool. Furthermore, not only must the .mb or .ma file be available to
all the machines, any files that are referenced within the scene must be accessible as well.
These include all BMPs, JPGs, or other image files used in your scene as textures, masks, or
image planes. Any cached particles or dynamics files must be available to the remote com-
puters as well.
And don't forget MEL scripts—make sure that all machines have access to any special
MEL scripts you've used to control your scenes. Otherwise, you can end up with a large
number of incorrect frames that will have to be re-rendered.
Obviously, most of these requirements can easily be met if the source files are all located
on a file server that each machine can access. Paths to these files must be the same on each
machine, such as mapping drive S to a public folder that each machine can access. Having one
machine map the public folder as S and another machine map it as T will not work. To
make a MEL script available to all the machines, save your script within a procedure in the
workspace.mel file in the root folder of your project. This file is read whenever you start
Maya or the command line renderer.
Figure 11.12: The Socks master process performs several tasks during a distributed render.
File Options
Computer Options
The next portion of the program contains about 80 percent of the code that is not directly
related to the interface. This may seem like a disproportionate amount for one list box and
six buttons, but hidden beneath those buttons and the list box is a host of functions:
• The TCP/IP implementation for Socks
• Error-handling routines for user mistakes and computer failures (lost connections)
• The communication protocol for the master and client computers
• File I/O routines for loading and saving settings
• The code for handling the list box and the six buttons
The list box contains the list of computers that make up the render pool. The columns
in the list box contain information that is useful when you are running a render job—the
computer name, its IP address, the name of the file being rendered, the connection status,
and the frame number. With these simple columns, it's easy to see what each computer is up
to and whether it has a problem.
Values contained in most of the columns are intuitive, except for ID and Status. The
number in the ID column is the identification number of the computer within Socks and is
primarily used for debugging purposes. The Status column is more involved and can take
one of six possible values that have the following meanings:
CXN FAILED The computer cannot be reached because the machine is down, not
running the client, or has one of many other problems that might be keeping it from
communicating.
LOST CXN A previously successful connection is terminated between the master and
a client machine. LOST CXN can indicate a willful disconnection of the client machine
or an error that has occurred in the connection between the master and the client.
CONNECTED You have established communication with the client machine, and the
client is awaiting orders.
RENDERING The client machine is busy rendering.
RENDER SUCCESSFUL The client machine has successfully completed the frame
that it was assigned.
RENDER FAILED The client machine failed to render the frame it was assigned. Cur-
rently, Socks does not display the reason for the render crash, but this would be a nice
option to add in the future or in your own distributed renderer.
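If you implement a similar status column in your own renderer, the six values map naturally onto an enumeration. The following C++ fragment is only an illustration (the names are not taken from the Socks source); it shows one way to turn that enumeration into the strings displayed in the Status column.

// status_column_sketch.cpp (fragment) -- one way to model the six Status values.
#include <string>

enum Status {
    CXN_FAILED,        // the machine could not be reached at all
    LOST_CXN,          // a previously good connection dropped
    CONNECTED,         // the client is reachable and awaiting orders
    RENDERING,         // the client reported STTUS1
    RENDER_SUCCESSFUL, // the client reported STTUS2
    RENDER_FAILED      // the client reported STTUS3
};

// Text displayed in the list box's Status column.
std::string statusLabel(Status s)
{
    switch (s) {
        case CXN_FAILED:        return "CXN FAILED";
        case LOST_CXN:          return "LOST CXN";
        case CONNECTED:         return "CONNECTED";
        case RENDERING:         return "RENDERING";
        case RENDER_SUCCESSFUL: return "RENDER SUCCESSFUL";
        case RENDER_FAILED:     return "RENDER FAILED";
    }
    return "UNKNOWN";
}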
startup. This file is saved each time you quit Socks, and it contains all the computers in the list
box at termination. Thus, you can easily enter all the machines once and let Socks load them
automatically from then on. Alternatively, you can create a text file by hand containing all the
machines in the pool. The format of each line in this file is as follows:
<computer name> <IP address> <1>
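Because the computer list file is plain whitespace-separated text, you can generate or parse it with any tool you like. The small C++ sketch below reads such a file; the structure and the file name are illustrative only, and the trailing 1 is kept as an opaque field because the chapter does not define its meaning.

// computer_list_sketch.cpp -- read a Socks-style computer list file.
// Each line: <computer name> <IP address> <1>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct PoolMachine {
    std::string name;
    std::string ip;
    std::string flag;   // the trailing "1" from the file, kept as-is
};

std::vector<PoolMachine> loadComputerList(const std::string& path)
{
    std::vector<PoolMachine> machines;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        PoolMachine m;
        if (fields >> m.name >> m.ip >> m.flag)   // skip blank or malformed lines
            machines.push_back(m);
    }
    return machines;
}

int main()
{
    // "renderpool.txt" is a hypothetical file name used only for this example.
    for (const auto& m : loadComputerList("renderpool.txt"))
        std::cout << m.name << " at " << m.ip << "\n";
    return 0;
}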
The Add and Load buttons have their counterparts in the Delete and Clear buttons. To
remove a machine from the list, select it and then click the Delete button. Clicking the Clear
button removes all the computers from the list. Finally, clicking the Save button saves the
current list of computers in a computer list file. You can use this feature to create pools of
machines that can later be added to the render farm in stages.
Near the bottom of the Socks main window are the controls for the render jobs. They
function somewhat similarly to the controls for the computer list. Each of the columns pro-
vides the user with some useful information. Table 11.1 lists and describes these columns.
Other Fields
The bottom of the main Socks window contains other useful fields and buttons, as well as
some rendering statistics information. Clicking the Render button initiates the render of the
selected Maya file, using the chosen options, on the machines highlighted in the Available
Computers list. You can Shift+click and Ctrl+click to select multiple machines for the
render. The selected machines can be used for different rendering tasks, though the order in
which a particular machine performs these tasks is determined by the Socks priority, which
you can set using the Priority slider located next to the Render button.
You can use the Chatting field to exchange messages with users on the client machines.
Simply select a machine from the Available Computers list, type the message in the field at the
bottom, and click the Send button. Any messages sent to you from client machines are dis-
played in the larger box with the IP address pre-pended to the message.
To use the e-mail capabilities of Socks, you need to fill in the three fields in the SMTP
Options sections. Currently, the only e-mail message that Socks sends you is notification that
a render job is complete. In the Server field, enter a fully qualified SMTP (Simple Mail Trans-
fer Protocol) server name, such as smtp.yournet.com. In the Your E-mail Address field, enter
the e-mail account that you use on that server. Leaving it blank or using an e-mail address
If you use the Frame Range options in the Manual Entry section, you might render
duplicate frames. Socks does not parse the Manual Entry command line, but just
blindly sends it to the client.
If you do not want to pass any parameters to the renderer and want to use the defaults
in the file, click the Override radio button and leave the input field blank. This calls the ren-
derer with no command line options but the filename.
For information about the rest of the options, see the Maya documentation on
rendering.
while ((Connected_to_Master) AND NOT (Quit_program)) {
    TCP.Listen (61276);
    if (TCP.message_received) {
        switch (TCP.message) {
            case "RENDR": Start_render (TCP.render_options);
            default: print (TCP.message);
        }
    }
    wait (5);
    if (rendering) {
        Check_On_Render ();
        TCP.Send (Status_Of_Render);
    }
    Do_Idle_Operations ();
}
until (Quit_program)
Figure 11.17 shows the main client dialog box. As in the main master dialog box, at
the top, the IP address of the machine running the client process is displayed next to the
Running on IP tag. In the Location of Render.exe field, you specify the location of the Maya
renderer on the local client machine. The Recent Messages box displays any informational
messages from Socks, as well as messages sent from the master machine. If you are a user on
the client machine and want to send a message to the master, you can simply enter the text in
the Chat with Master field and click the Send button. The Connection Status field displays
information about the client's connection to the master process machine, and the Render Sta-
tus field displays information about the current rendering task. Clicking Quit terminates the
client process.
Communication
When the client starts, it creates a socket that listens for incoming connections on port
61276, as mentioned earlier. Once a connection is made between the two machines, the
client no longer accepts any connections until the current connection is broken. This
arrangement simplifies the client process, since handling multiple connections adds
considerable complexity. Future versions of Socks may include the capability to handle
multiple connections.
Figure 11.17: The Socks client main window displays information about current renders.
Fortunately, allowing only one connection eliminates the possibility that two machines
running master processes can submit jobs to the same client simultaneously, in turn causing
the client machine to run two instances of the renderer. (Recall the earlier discussion in
which two instances of the same rendering program fight for CPU time and cause the final
render time to be substantially longer than running the two renders back to back.) Having
only one connection also allows for an easier implementation of the communication between
machines, since you don't have to keep track of which master machine you are
communicating with. We'll discuss this protocol shortly.
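For reference, a single-connection listener of this kind takes only a handful of Winsock calls. The C++ sketch below is not the Socks source: only the port number comes from the chapter, error checks are omitted for brevity, and the console output stands in for the real protocol handling.

// single_connection_listener.cpp -- illustrative Winsock2 sketch of a client
// that accepts one master connection at a time on port 61276.
// Link with ws2_32.lib.
#include <winsock2.h>
#include <cstdio>

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(61276);                 // the port Socks uses
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, 1);                          // queue at most one pending master

    for (;;) {
        // Accept a single master; no further connections are accepted until
        // this one is closed, which keeps the client logic simple.
        SOCKET master = accept(listener, nullptr, nullptr);
        if (master == INVALID_SOCKET)
            break;

        char buffer[512];
        int received;
        while ((received = recv(master, buffer, sizeof(buffer) - 1, 0)) > 0) {
            buffer[received] = '\0';
            std::printf("message from master: %s\n", buffer); // hand off to the protocol handler
        }
        closesocket(master);                      // connection broken; accept the next master
    }

    closesocket(listener);
    WSACleanup();
    return 0;
}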
Another aspect of communication between the master and client machines occurs in the
form of status updates. As soon as the master computer connects to the client, the client starts a
timer that sends a status update to the master every five seconds. At these intervals, the client
process checks to see if the computer is rendering or idle. If the computer is idle, the client process
also determines whether the computer has successfully or unsuccessfully rendered a frame. The
five-second interval provides frequent updates to the master, but not so frequent as to clog up the
network or add wasted time to the render job. The packet size for these updates is roughly 20
bytes, so bandwidth should not be adversely affected.
RENDR 0/1 Options that the client will use to render (for example, raytracing,
the number of frames, the file format, and so on)
When you click the Render button, the master process builds a data string that repre-
sents the options that will be used for the command line renderer on the client machine. This
data, along with the correct header, is sent as a packet to the client machine. When the client
receives the packet, it strips out the first five characters to determine the command word, and
the sixth character to get the option.
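That layout—a five-character command word, a one-character option, and the rest of the packet as data—is easy to mirror in your own code. The following C++ sketch illustrates it; the helper names and the sample option string are assumptions made for the example, not code from the CD.

// packet_sketch.cpp -- illustrative encoding and decoding of the message layout
// described in the text: a 5-character command word, a 1-character option, and
// the remaining bytes as data.
#include <iostream>
#include <string>

struct Message {
    std::string command;   // e.g. "RENDR", "STTUS", "CHATS"
    char        option;    // e.g. '0' or '1' for RENDR
    std::string data;      // e.g. the command-line options for the renderer
};

// Build the packet the master sends when the Render button is clicked.
std::string buildPacket(const Message& m)
{
    return m.command + m.option + m.data;
}

// Parse a packet on the client: the first five characters are the command word,
// the sixth character is the option, and everything after that is data.
Message parsePacket(const std::string& packet)
{
    Message m;
    m.command = packet.substr(0, 5);
    m.option  = packet.size() > 5 ? packet[5] : '0';
    m.data    = packet.size() > 6 ? packet.substr(6) : "";
    return m;
}

int main()
{
    // Hypothetical render options string; the real flags come from the master GUI.
    std::string packet = buildPacket({"RENDR", '0', " -s 1 -e 10 scene.mb"});
    Message m = parsePacket(packet);
    std::cout << m.command << " option=" << m.option << " data=" << m.data << '\n';
    return 0;
}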
If the option value is set to 0, a normal render occurs (we'll call this RENDR0). The client
machine parses the data string sent to it from the master and combines it with the path to the
Maya renderer on the local machine to create a fully qualified command. Recall that this
data is composed of the rendering options set in the master GUI. As mentioned previously,
these values override those set in the scene file.
The client then runs the code in Listing 11.3, which creates the process, sets a
PROCESS_INFORMATION structure that is useful in other parts of the code, and sets the priority
of the render to low.
Listing 11.3: The Code That Launches the Render in the Client

ret = CreateProcess (NULL,                 // create the process
                     commandline,
                     NULL,
                     NULL,
                     FALSE,
                     CREATE_NEW_CONSOLE,
                     NULL,
                     NULL,
                     &StartupInfo,
                     &ProcessInformation);

// set the priority to the lowest priority in NT and 2000
SetPriorityClass (render_process_handle, IDLE_PRIORITY_CLASS);
After successfully creating the render process, the client sends a STTUS1 message to the
master machine. We'll discuss the STTUS command words in the next section.
Although you can send a RENDR1 command, it is not yet implemented. The RENDR1 com-
mand tells the computer to render a file that is local to the client machine and not found on a
file server. Distributed rendering of a file that is not on a shared file system is a complex
problem and requires a lot more code.
The most useful data is in the hProcess field, which specifies a handle to the newly created
process. (You can find the meaning of the other fields on the MSDN.) With this handle, which
the client copies and stores in a private class member, you can set the CPU priority of the
render running on the client machine. (This priority is different from the Socks priority
explained earlier.) The process handle also lets you determine if the render is still running
and, if not, retrieve the exit code of the process. With this information, you can determine if
the process terminated cleanly or by error. Of course, this handle also lets you kill the render
if the need arises.
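A sketch of how that handle might be used during the five-second status check follows. The function names and return values are assumptions made for illustration; the Win32 calls themselves (WaitForSingleObject, GetExitCodeProcess, and TerminateProcess) behave as described.

// check_on_render_sketch.cpp (fragment) -- using the hProcess handle stored from
// CreateProcess to poll a running render, as a periodic status check would.
#include <windows.h>

enum RenderState { STILL_RENDERING, RENDER_SUCCESS_STATE, RENDER_FAILED_STATE };

// hProcess is the handle copied out of the PROCESS_INFORMATION structure.
RenderState CheckOnRender(HANDLE hProcess)
{
    // A zero timeout just asks "is it finished yet?" without blocking.
    if (WaitForSingleObject(hProcess, 0) == WAIT_TIMEOUT)
        return STILL_RENDERING;                    // renderer still busy -> STTUS1

    DWORD exitCode = 0;
    GetExitCodeProcess(hProcess, &exitCode);
    return (exitCode == 0) ? RENDER_SUCCESS_STATE  // clean exit -> STTUS2
                           : RENDER_FAILED_STATE;  // nonzero exit -> STTUS3
}

// Killing the render if the need arises.
void KillRender(HANDLE hProcess)
{
    TerminateProcess(hProcess, 1);                 // force-stop the renderer
    CloseHandle(hProcess);                         // release the handle
}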
The meaning of a STTUS command depends on the value in the Option field:
STTUS1 Tells the master machine that the client has just started a render and will be
busy until the client sends another STTUS message telling it otherwise. This message is
sent most frequently since the client issues one of these every five seconds during ren-
dering. To conserve bandwidth, the client does not send messages when it is idle.
STTUS2 Tells the master machine that the client has successfully rendered an image and
is awaiting more commands.
STTUS3 Tells the master machine that the client has unsuccessfully rendered an image
and is awaiting more commands. The master process places this frame back into the
list containing all the frames that still need to be rendered. Unfortunately, there is cur-
rently no way to report the reason that the render failed, so the master process tries to
send that machine another frame in hopes that this was just a one-time error.
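On the master side, the bookkeeping this implies can be reduced to a small amount of code. The C++ fragment below is an illustrative sketch, not the Socks source: the container choices and function names are assumptions, but the behavior matches the description above, with a failed frame returned to the pool and the same client handed another frame.

// master_status_sketch.cpp (fragment) -- illustrative handling of STTUS messages
// on the master: failed frames go back into the pool of frames still to render.
#include <deque>
#include <map>
#include <string>

std::deque<int> framesToRender;            // frames not yet assigned
std::map<int, int> frameAssignedTo;        // frame number -> client id

void AssignNextFrame(int clientId)
{
    if (framesToRender.empty()) return;     // nothing left to hand out
    int next = framesToRender.front();
    framesToRender.pop_front();
    frameAssignedTo[next] = clientId;
    // SendRenderCommand(clientId, next);   // hypothetical: build and send a RENDR packet
}

void OnStatusMessage(const std::string& command, int clientId, int frame)
{
    if (command == "STTUS2") {
        // Frame finished successfully; record it and hand this client the next frame.
        frameAssignedTo.erase(frame);
        AssignNextFrame(clientId);
    } else if (command == "STTUS3") {
        // Render failed: return the frame to the pool for reassignment, then try
        // the same client again in hopes the failure was a one-time error.
        framesToRender.push_back(frame);
        frameAssignedTo.erase(frame);
        AssignNextFrame(clientId);
    }
    // STTUS1 only means "still rendering", so the master just refreshes the display.
}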
Figure 11.18: The Socks master and client processes communicate through a variety of messages
during a distributed rendering task.
CHATS None A string that you want sent to the master or client computer
The CHATS command allows two users on the network to communicate through text
messages. If someone at a client machine notices a problem with the render or just wants to
talk to the person running the master process, the two can chat back and forth using the text
boxes provided in each program. Similarly, the person at the master machine can begin a
chat session by highlighting a client machine in the main master process window, typing a
message, and clicking Send.
Figure 11.18 shows an example communication scenario. As you can see, it is a simple
protocol (why make things harder on yourself?), but still contains room for more complex
message passing in the future. In this example scenario, the master process initially sends a
render command to the client to render frame 1 of an animation. Every 5 seconds after that,
the client sends back a status message indicating that it's still working on the render. After 30
seconds, the client informs the master process that it successfully completed the render and is
now ready for another rendering task. The master process then sends a render command to
the client to render frame 4. The client, however, experiences an error with this frame render
and sends the master process a status message with the error. At this point, the master
process places frame 4 back in the pool of unrendered frames for reassignment and sends the
client machine a new rendering task.
Summary
In this chapter, we provided information that we hope will help you make better use of your
rendering resources. A few simple tools and recommendations can make a world of differ-
ence in getting the most out of your render farm.
We also included a discussion of Socks, a distributed renderer for Windows machines.
As written, Socks can help you with your distributed rendering tasks, but our hope is that
you will be able to write your own distributed renderer that will best suit your needs. The
source code on the CD-ROM included with this book should give you a good launching
point, in combination with the description of the code in this chapter. Before you know it,
you could have dozens of people performing distributed rendering on hundreds of machines
shooting billions of light rays to render your animation!
Index
B
baking simulation data, 199, 209-210, 210,
220
bind pose, 153-154
binding arm mesh to skeleton, 26, 27
Blend Shape dialog box, 161, 161
blend shapes. See lip-synched animations
bounce lights, 315, 315-318, 317-318
Boundary tool, 11-12, 11
Bublitz, Ron, 264
buttons, See also controls
Deselect All button, 285-286
in particle simulations, 257-258, 258
radio buttons, 273-274, 274, 280
Select All button, 285-286
symbol buttons, 274
in Photoshop, 19, 19
planning, 17
polygons, 18-19, 18-19
reference objects, creating, 154-155
thumbnailing, 3, 4
time constraints and, 4
area lights, 299-300, 300
arm example. See setting up for animation
arrays, 283
Attach Surfaces Options dialog box, 14, 14
Attribute Collection script, 26
attributes, See also color
centralizing control of, 243-245, 244
character size/proportion, 181-183, 182
C
caching dynamic data, 199, 216, 331
Cameron, James, 89
carapace example. See organix modeling
cartoon character example. See lip-synched animations
CD-ROM
files
ArmSetupStart.ma, 23
base_humanoidShape.tga, 174
Blobbyman.ma, 30-31, 31
Bonus_Binaries, 81
breakFinal.mov, 212, 213
breakOutFinal.mov, 221, 221
Maya ASCII files and, 174-175, 175
Maya binary files and, 174
modeling characters, 172-173, 173
running scripts, 179
script architecture and, 175-178
shader trees/colors, 173-174, 174
varying character size/proportion, 181-183, 182
varying colors, 178-181, 180-181
heroes and, 169-170
Horde team tasks
defined, 171, 186
Maya ASCII files and, 188-189
Maya binary files and, 189
positioning crowd members, 186-188, 187-189
setting status tags, 189-190, 190
varying crowd member animations, 189-191, 190-191
improving, 192-193
manual vs. procedural methods, 168-169, 169
modifying flexibly, 168-169, 191, 192
naming components, 173-174
overview of, 167-168, 193
patience and, 172
task forces for
animating crowd as whole, 186-191, 187-192
animating variable characters, 183-186, 184-185
modeling variable characters, 172-183, 173-175, 180-182
overview of, 170-171
CVs (control vertices), 8
D
D'Arrigo, Emanuele, 167
Davis, Timothy A., 325
decay rate of light, 296-297, 297, 299-300, 300
depth map shadows, 304-306, 304-306
Derakhshani, Dariush, 119
diffused light, 309-312, 310-311, 314-315, 314-315
dinosaur shattering glass. See dynamic rigid body simulations
directional lights, 298, 307, 307-308
dissipating shadows, 308, 309
distributed rendering, 325-349, See also rendering
defined, 325
dynamic simulations and, 199, 331
Frame Check tool, 332, 332
at frame level, 327, 327
at image level, 326, 326
Load Scan tool, 333, 333
using Maya Dispatcher
dropped frames problem, 330-331
dynamics problems, 199, 331
Host Configuration window, 328-329, 329
incomplete frames problem, 331
interactive session problems, 333-334
Jobs menu, 329, 330
overview of, 326, 327-328
Pool menu, 329-330, 330
rendering licenses, 330
Save/Render window, 328, 328
sending packets of frames, 328
setting maximum jobs, 329, 329
specifying idle time, 329, 329
Submit Job window, 328, 328
The_Dispather_Interface window, 328-330, 329-330
thrashing problem, 334, 334
overview of, 325-326, 349
recommendations, 333-334, 334
using Socks
CHATS command, 349
chatting field, 339, 341
client process, 335, 336, 343-349, 344-345, 348
commands, 345-349, 348
computer options, 339-341, 339-340
defined, 335
exiting, 339, 342
file options, 339, 339
GUI, 338
installing, 335
main client window, 344, 345
main master window, 339-342, 339-340
master process, 335-343, 336, 338-340, 342
J
Jamaa, Elie, 16
Johnson, Rebecca, 119
Johnston, Ollie, 89, 90
K
Keller, Phil, 228
keyframe animation. See motion capture
kissNwave example. See motion capture
Kundert-Gibbs, John, 119, 126, 195, 264
Kunzendorf, Eric, 3
L
Latham, William, 51
lattice, 27-28, 27-29
Layer Editor, 93, 94, 97, 97
layouts in GUIs, 269-270
A League of Their Own film, 4
Lee, Peter, 126, 264
lighting, 293-323
Ambient Shade attribute, 297-298, 298
color attribute, 295-297, 296-297
creating
bounce light, 315-318, 315, 317-318
diffuse lamp light, 309-312, 310-311, 314-315, 314-315
fill light, 318-319, 319
harsh lamp light, 312-314, 312-314
intensity curves, 300-303, 301-303
moonlight, 307, 307-308
for moving characters, 306, 320-322, 321-322
reflections, 322-323, 323
decay rate attribute, 296-297, 297, 299-300, 300
design, 293-294
duplicating, 301, 302
Emit Diffuse/Emit Specular attributes, 296, 296
HyperShade tool, 302, 302-303, 310, 310-311
intensity attribute, 295, 296
Light Linking, 300, 320
overview of, 293, 323
passes, 294
properties, 294-295
RAM and, 295
ramp attributes, 295, 296-297
rigs, 319-322, 321-322
shadows
depth map shadows, 304-306, 304-306
directional light with, 307, 307-308
dissipating, 308, 309
overview of, 304
raytrace rendering of, 298, 304, 308-309, 309
scanline rendering of, 308, 309
texture, mapping to attributes of, 295-297, 296-297
types
ambient light, 297-298, 297-298
area light, 299-300, 300
directional light, 298
point light, 298
spotlight, 298-299, 299
lip-synched animations, 119-163
adding personality to vocal tracks, 127-128
creating vocal clips, 124-126, 125
creating vocal tracks, 126-127, 126
fine-tuning timing, 127
head replacement techniques, 149
humanoid cartoon characters
adding eye/forehead movement, 137-138, 138
adding target shapes to blend shapes, 136-137, 137
attaching facial surfaces, 140, 141
base shapes, 135, 137
controlling blend shapes, 135
controlling face as whole, 140-142, 141
creating blend shapes, 124, 124, 133-140, 134-135, 137-139
creating blend shapes using groups, 137, 139-140
creating character sets, 142, 142
creating happiness/surprise, 138-139, 139
creating head/face shapes, 129-130, 130
creating lip-synch libraries, 123-124, 142-143
Maya 4.5 Savvy (Kundert-Gibbs and Lee), 264
Maya
ASCII (*.ma) files, 174-175, 175, 188-189
binary (*.mb) files, 174, 189
versus custom software, 247
shortcomings, 78-79
Maya Dispatcher. See distributed rendering
MEL (Maya Embedded Language), See also GUI creation
overview of, 263-264
Snap tool, 115
Socks and, 337-338
sourcing, 115
model walking examples. See motion capture
modeling, See also animation; organix modeling
crowd scene characters, 172-173, 173
in lip-synched animation, 122, 122, 150-151
using NURBS, See also NURBS
creating basic shapes, 35-37, 36-37
defined, 8
flour sack example, 10-16, 10-16
overview of, 3, 8
using polygons
converting NURBS to, 18, 37-38, 38
converting to SubDivision surfaces, 40-42, 43, 45-46, 46
defined, 9, 10
Paint Effects workaround, 66-68, 66-69
texturing, 18-19, 18-19
texturing NURBS versus, 16-18, 17
using SubDivision surfaces, See also SubDivision surfaces
defined, 9-10, 10
hierarchy feature, 39
horse example, 35-49, 36-49
overview of, 35
using trims and blends, 8-9, 10
Monsieur Cinnamon example. See lip-synched animations
moonlight, 307-308, 307
More Options dialog box, 342, 342-343
motion capture, 85-117
cleaning
blending with keyframe animation, 93-94, 94
defined, 91-92
using filters, 92-93, 92
marker noise, 92-93, 92
marker occlusion/flipping, 93
over-cleaning, 93
twisted joints, 93
defined, 85-86
in feature films, listed, 87
in Final Fantasy, 86, 87, 90-91, 91
human actors and, 87-88
using with keyframe animation
by combining, 115-117, 115-116
by layering, 106-114, 108-113
by offsetting, 94-106, 96-106
overview of, 85, 86, 90
layering keyframe animation over with Trax Editor
adjusting f-curves, 113, 113
animating disabled clips, 107-108, 113-114
animating enabled clips, 107, 109-112, 110, 114
creating character sets/clips, 107
creating kissNwave clip, 112, 112
locking non-animation attributes, 108-109, 108
overview of, 106, 114
set keys for animation range, 109, 110
set keys for kissNwave animation, 109, 111-112
shifting start time of kissNwave clip, 112-113
workflow steps in, 107
magnetic systems for, 86
model walking examples
adding kiss and wave, 106-114, 108-113
over bump in terrain, 94-106, 96-106
tripping over box, 115-117, 115-116
offsetting keyframe animation over
adjusting f-curves, 99, 100
adjusting "skin fit", 95
animating offset controllers, 98-99, 100
animating walking over bumps, 100-104, 101-106
displaying controls, 96-98, 97-98
loading scene file, 95
locking rotation/scale values, 98, 99
overview of, 94-95, 105-106
setting preferences, 95-96, 96
optical systems for, 86
R
R & L ankleControl, 31, 32
R & L handControl, 31, 32
radio buttons, 273-274, 274, 280
ramp attributes of light, 295, 296-297
Rasche, Karl, 332, 333
raytrace rendering, 298, 304, 308-309, 309
Rebuild Surface Options dialog box, 11, 11, 13, 23
reflections, creating, 322-323, 323
relative icon positioning, 276-277
render farms, 325
rendering, See also distributed rendering
lip-synched animations, 162, 162
particle simulations, 254-257, 254-258
raytrace rendering, 298, 304, 308-309, 309
scanline rendering, 308, 309
RGB color, 180, 180
Richards, Jacob, 325
rigging lip-synched animations, 122-124, 123-124
rigid bodies. See dynamic rigid body simulations
Ritlop, Frank, 293
rotation values, locking, 98, 99
rotocapture, 89
rotoscoping, 89
Sakaguchi, Hironobu, 90
scale values, locking, 98, 99
scanline rendering, 308, 309
Scientific American magazine, 239
Scott, Remington, 85
scripts, See also GUI creation; MEL
architecture, 175-178
Attribute Collection script, 26
on CD-ROM, trying out, 285
running, 179
scriptJobs, 280, 287
scriptwriting, 3, 4-6
Set Driven Key command, 9
set keys, 109, 110, 111-112
setting up for animation, See also animation
arm creation example
bind arm mesh to skeleton, 26, 27
correct wrist collapse, 27-29, 27-28
create arm IK and Point constrain to hand, 25, 25
create biceps muscle, 29-30, 30
finish arm bone setup, 23-25, 23-24
overview of, 22-23
place controls, 30-32, 31
to carry story forward, 21-22, 21
drawings and, 20, 20-21
having/sticking to plans, 20
in lip-synching, 151-153, 151-152
overview of, 3, 19-20, 33
perfection vs. good enough, 22
for speed, 22
shader trees/colors, 173-174, 174
shadows. See lighting
shattering glass example. See dynamic rigid body simulations
ShoulderControl, 31, 32
Sims, Karl, 79, 80
simulations. See dynamic
single skin objects, 8
skinning methods, choosing, 20
Smith, Mark Jennings, 51
SMTP options in Socks, 339, 341-342
Snow White film, 89
Socks. See distributed rendering
sound tracks. See lip-synched animations
spline controllers, 242-243, 243
Habib Zargarpour of Industrial Light & Magic is awash in the complexity of particles as he
shares his experience creating the realistic wave mist from The Perfect Storm. Page 223
Susanne Werth of Children's TV in Germany steps you through creating a template GUI to
control animation of nearly any character you're likely to run into. Page 263
Frank Ritlop of Weta Digital illuminates the delicate task of lighting with a thorough
breakdown of the layers of lights you need to bring reality to your scene. Page 293
Timothy A. Davis and Jacob Richards of Clemson University provide their Windows-based
distributed renderer on the CD and then show you how to use it to improve your own
rendering. Page 325