
MIRALab Copyright © Information 1998

Virtual Clothes, Hair and Skin for Beautiful


Top Models
Nadia Magnenat Thalmann, Stephane Carion, Martin Courchesne
Pascal Volino, Yin Wu
MIRAlab, University of Geneva

Abstract
Since 1986, we have carried out extensive research on simulating realistic-looking humans. We created Marilyn Monroe and Humphrey Bogart, who met in a café in Montreal. At that time, they did not wear any actual clothes: Humphrey's body was built from a plaster model that had the shape of a suit, and colors on Marilyn's body made it look as if she wore a dress. Hair was simulated as a single global shape, and skin was just a color. Since then, we have carried out extensive research on simulating truly deformable virtual clothes worn by virtual humans. We also needed an appropriate simulation of skin, and have recently developed new research on skin in order to reduce the plastic look of our synthetic actors. There was also a need to simulate hair in an efficient way; new methods have been developed, for both design and animation, that are compatible with the clothes module. In this paper, we present our most recent research results on these topics. We are now able to simulate top models that begin to look like real ones. Our latest work, which shows Marilyn receiving a Golden Camera award in Berlin, Germany, demonstrates the results of our research. This sequence was shown in a ZDF television program seen by more than 15 million viewers.

I - Cloth modeling and simulation


State of art in cloth synthesis and animation
The evolution of cloth synthesis has gone through several phases. The first, and most important, was the efficient simulation of fabric motion and deformation using mechanical computer simulation. These techniques were pioneered by Weil [C23], Terzopoulos et al. (1987) and Haumann and Parent (1988), using different kinds of mechanical models based on triangular discretisation of the simulated surface. More recently, several other techniques have been explored, such as polynomial surfaces (Witkin and Welch [C25], Baraff and Witkin [C3]) and particle systems (Breen et al. [C4]). The second phase focused on specific behaviors of cloth deformation, such as wrinkle generation. Kunii and Gotoda [C10] and Aono
[C1] produced wrinkles by combining geometric and physical aspects of the deformations.
The final aspect of cloth generation was the design and animation of garments on virtual actors. This aspect includes several issues such as garment modeling and construction, as well as precise handling of the interaction between the cloth and the body that wears it, using collision detection and response. The first studies were performed by Lafleur et al. [C11], Carignan et al. [C6] and Yang and Magnenat Thalmann [C27], where complex garments were generated by assembling panels around a synthetic actor, which was then animated.
More than just pieces of fabric hanging in the air or draping over a table, realistic cloth simulation raises a variety of new problems and challenges. One must be able to design any shape of cloth, cutting fabric pieces and assembling them just as a real tailor would, without being constrained by limitations of the data structure (regular triangle meshes, square panels). Furthermore, the cloth behavior is strongly determined by the collisions between the fabric and the body, and also by self-collisions within the cloth itself, which typically occur in wrinkles and crumpling situations. Collision detection should therefore be managed very efficiently, particularly self-collision detection. The mechanical model should also handle large deformations efficiently, such as those occurring during wrinkling. This requires a good model of the nonlinear mechanical behavior of the fabric [C7], as well as an efficient calculation algorithm that gives acceptable results even when the deformations become large compared to the discretisation size of the surface, and that ensures stability even under high extension and curvature.

Figure 1: A sequence involving animation of synthetic cloth


Concerning the design itself, good cloth software should provide efficient tools for designing a garment and fitting its shape exactly to the synthetic actor.

Simulating cloth with high deformations


One of the major difficulties of cloth simulation is coping with all the large deformations and wrinkle situations that real cloth exhibits when worn by a body. Typically, the model should be robust enough to ensure realistic animation even when the deformations are large compared to the discretisation size of the surface, and despite the many collisions created by crumpling, wrinkling, or simply by contact with the body.

The main idea of the mechanical model is to integrate Newton's equation of motion directly, keeping the quickly evaluated time steps small so that collision detection can be performed very frequently [C20]. More sophisticated and time-consuming models based on global minimization or Lagrangian dynamics formulations, which allow a larger time step, would be a waste of time here, since discontinuous events such as collisions must still be handled accurately. Furthermore, this direct formulation allows easy and precise inclusion of any nonlinear mechanical behavior. With such a model, we can also act directly on the positions and speeds of the elements, and thus avoid handling collisions through strong repulsive forces that perturb the simulation.
The animated deformable object is represented as a particle system: sets of vertices forming irregular triangles, allowing surfaces of any shape to be easily modeled and simulated. Using the mechanical behavior parameters of the material, strains are computed within each triangle and the resulting forces are applied to its vertices. The object is considered isotropic and of constant thickness. Its elastic properties are mainly described by standard parameters: the Young modulus, the Poisson coefficient, the density and the thickness.
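The direct integration described above can be sketched as a simple particle-system step. This is a minimal midpoint (second-order) integrator for Newton's equation, not the authors' implementation; the force callback stands in for the triangle-based strain computation.

```python
import numpy as np

def step(positions, velocities, masses, forces_fn, dt):
    """One midpoint integration step of Newton's equation F = m*a for a
    particle system. forces_fn(positions, velocities) returns an (N, 3)
    force array; in the paper's model it would come from triangle strains."""
    a1 = forces_fn(positions, velocities) / masses[:, None]
    # Re-evaluate forces at the half step (midpoint method).
    mid_p = positions + 0.5 * dt * velocities
    mid_v = velocities + 0.5 * dt * a1
    a2 = forces_fn(mid_p, mid_v) / masses[:, None]
    new_v = velocities + dt * a2
    new_p = positions + dt * mid_v
    return new_p, new_v
```

Because each step is cheap and explicit, collision handling can be interleaved after every step, acting directly on positions and velocities.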
Rough discretisation, however, alters the behavior of the surface. In particular, heterogeneous triangulations "rigidify" the whole surface, preventing easy buckling. These effects have to be corrected through tuning and adjustment of the mechanical parameters. In particular, textile easily buckles into double curvature, but buckle formation requires a change of area that increases with the size of the discretised elements. To facilitate buckle formation on roughly discretised objects without losing textile stretching stiffness, we use a variable Young modulus that reduces the stretching stiffness under compression and small extension.
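A variable Young modulus of this kind might look as follows. The thresholds and reduction factor here are purely illustrative, not values from the paper:

```python
def effective_young_modulus(strain, E, soft_limit=0.01, soft_factor=0.1):
    """Hypothetical strain-dependent stiffness: the full modulus E applies
    for extension beyond soft_limit, while compression and small extension
    see a reduced modulus, easing buckle formation on coarse meshes.
    soft_limit and soft_factor are illustrative parameters."""
    if strain > soft_limit:
        return E
    return soft_factor * E
```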

Collision management
Collisions are widely spread within all the simulated cloth, either being in contact with the body,
or through contact between the wrinkles of the cloth. Thus, an efficient way of handling these
numerous collisions is required.
Instead of considering collisions as dynamic "potential walls" that repel the colliding objects with high and discontinuous forces, thus requiring very small time steps for precise computation, collisions are taken into account in a separate step, as a geometrical and kinematic constraint resolution independent from the mechanical computation resulting from "continuous" forces such as elasticity, gravity and wind, as described above. The computation time step is thus not altered by collisions, and the computation remains efficient despite a huge number of collisions.

As soon as two elements collide, momentum transfer is performed according to the mechanical conservation laws, taking into account bouncing and friction effects. All the collisions are processed independently in this way, during one single common time step. Whenever elements are involved in several collisions, as with multilayer cloth, collision response is performed iteratively until all the collision constraints are resolved. This technique also allows the collision effect to propagate through the different layers of a cloth stack. Compressible viscoplasticity has been added to the collision response model, not only to simulate multilayer compressibility accurately, but also to ensure good stability of the model in these situations.
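The momentum transfer for a single collision can be sketched in one dimension, along the collision normal. This is a textbook momentum-conserving exchange, hedged as an illustration of the principle rather than the paper's exact response model; iterating such exchanges over all active collisions resolves stacked contacts.

```python
def collision_response(m1, v1, m2, v2, restitution=0.0):
    """Momentum-conserving collision response along the contact normal.
    v1, v2 are the velocity components of the two elements along that
    normal; restitution=0 gives a perfectly inelastic contact (cloth-like),
    restitution=1 a perfectly elastic bounce. Sketch only."""
    # The center-of-mass velocity is unchanged by the exchange,
    # so total momentum m1*v1 + m2*v2 is conserved.
    v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)
    new_v1 = v_cm - restitution * (v1 - v_cm)
    new_v2 = v_cm - restitution * (v2 - v_cm)
    return new_v1, new_v2
```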

Figure 2: Dressed actors and multilayer cloth.

Collision detection
Collision detection, and particularly self-collision detection, is often the bottleneck of simulation applications in terms of calculation time, because the scene complexity entails a huge number of geometrical tests to determine which elements are colliding.
Depending on the way the cloth object is represented, different techniques have been developed for solving the collision detection problem efficiently, using methods based on space subdivision such as voxelisation or octrees [C11][C26], hierarchisation [C22], rasterisation [C16], shortest distance tracking [C12], or mathematical techniques suited to curved parametric surfaces [C2][C8][C17][C18]. In our case, the problem is further complicated because we are handling discretised surfaces that may contain thousands of polygons. In the clothing problem, where garments are widely in contact with the body, collisions are not sparse at all and should be detected efficiently and accurately. Furthermore, because of the wrinkles and possible contacts between different parts of the same cloth, we have to detect self-collisions within the surfaces efficiently. This prevents the use of standard bounding box algorithms, because potentially colliding regions of a surface always touch each other by adjacency.
We have developed a very efficient algorithm for handling this situation. It is based on hierarchisation and takes advantage of adjacency which, combined with a surface curvature criterion, lets us skip large regular regions during self-collision detection. We thus obtain a collision evaluation time that is roughly proportional to the number of colliding elements, and independent of the total number of elements composing the deforming surfaces.
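The curvature criterion can be sketched as follows. This is an illustrative simplification: a region whose triangle normals all lie within a cone cannot fold back onto itself, so its self-collision test can be skipped; the published algorithm also checks the region's 2D contour, which is omitted here.

```python
import numpy as np

def can_skip_self_collision(normals, threshold=0.0):
    """Curvature test (sketch): if every triangle normal in a surface
    region has a positive dot product with the region's mean normal
    direction, the region is 'flat enough' not to self-collide and can
    be pruned from self-collision detection."""
    mean = normals.mean(axis=0)
    n = np.linalg.norm(mean)
    if n == 0.0:
        # Normals cancel out: the region is strongly folded, must be tested.
        return False
    mean /= n
    return bool(np.all(normals @ mean > threshold))
```

Applied recursively down a hierarchy of surface regions, this prunes large flat areas in constant time, leaving detailed tests only where the surface actually wrinkles.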

Figure 3: Hierarchical collision and self-collision detection using surface curvature.


This algorithm is very general and deals efficiently with crumpling situations. However, the cloth is often supported by the body or by other underlying cloth through contact regions involving large surfaces, and the displacement between frames remains quite small. Efficient optimizations for speed and robustness may therefore be included in the detection algorithm. The main optimization concerns remnant collisions, which are kept in memory even if they are not detected for a given number of frames, in order to ensure correct tracking of their orientations. This is linked to an incremental collision detection scheme, which tracks displacements and surface proximities, updating existing collisions between frames and detecting new neighboring collisions. Another important feature is orientation consistency checking and correction, which identifies collision regions and performs a statistical evaluation in order to correct wrongly oriented collisions. These solutions have been tested successfully in difficult situations involving crumpling cloth [C20][C21], and they robustly handle situations involving multilayer garments.

Tools for garment design


The 2D panel approach has proven to be an efficient technique for designing garments [C24][C27], as it is closely related to the techniques used in the real clothing industry. An interface was created to provide an easy and user-friendly environment for performing this task, allowing users to build a complete, complex scene interactively.
This interface provides a wide number of tools, such as:

a- A main panel to control the objects in the scene.


b- A material and texture editor to apply complex color and material designs to the cloth, giving the objects a more realistic appearance.

c- A geometric panel allowing precise editing of the shape of the cloth, and modification of its size and position in order to fit it exactly to the body.
d- A display panel to modify the way the scene is rendered to the screen, allowing different visualisations depending on the context (how the objects are displayed and what kinds of elements are made visible, position of the lights, background, ...)

Figure 4: A view of the interface, displaying how a garment is adjusted interactively to a body
by removing a piece of fabric and seaming.
For the construction of clothes, some other specific tools were added. With the parameters panel, users can interactively change the physical parameters of the objects to simulate particular fabrics (wool, cotton, denim, ...) and add external forces (such as gravity and wind). Specific tools were also created to perform the seaming, and other tools allow the user to manipulate the triangles separately in the 3D scene to adjust the clothes and fit them precisely around the body.

II - Hair design, simulation and rendering


Hair simulation involves two important tasks: hair animation, and hair rendering. We describe in
the following sections the two models we developed to perform these tasks.
Hair animation
To generate natural hair animation, physical simulation must be applied. However, precise simulation including collision response is impractical because of the sheer number of individual hairs. Previous work [H1][H4][H10] did not consider any possible simplification of the simulation and tended to animate each hair individually.

Figure 5: Hair deformation reduced to the deformation of a tube.


Our simulation model takes advantage of two properties of hair: first, the low deformation of its initial shape, and second, the weak variation of shape from one hair to another. The resulting model reduces a hair's deformations to those of a tube. Each hair strand is parametrically defined inside the tube, so that it follows the tube's deformations (figure 5). Physical simulation is performed on a set of such tubes, defined as the hair cores. Tubes are then interpolated over the support according to the hair density.

The hair core is described by a spline interpolated from a few animated vertices linked to each other by edges. The final hair shape is computed from that spline (figure 6): a spline is extracted from the positions of the vertices of the hair core, and a number of coordinates, depending on the complexity of the shape, are computed along it. These coordinates are then used as control points, and the hair is represented as a NURBS curve.

Figure 6: Progression from the hair's core formed with animated edges and vertices, to the spline
defined with interpolated control points, to the hair itself.
Hairs do not exist by themselves: each hair strand's extremity must be attached to a vertex of the support object, the hair root. An orientation matrix is associated with the hair root. The matrix is unique for the whole head of hair if the support is rigid, or different for each root otherwise. The orientation matrix allows us to perform hair animation on any type of support, and to simulate, for example, a fur coat.

The physical simulation requires the computation of internal forces. They are computed from the basic hair core. Bending and torsional forces are computed from the angles between two successive edges and the binormal, respectively, as shown in figure 7.

Figure 7: The curvature at vertex vi is computed using the angle between the edges' normal ni-1
and ni, the binormal being a unit vector parallel to their cross product.
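The bending measure along a hair core can be sketched as the angle between successive edges of its polyline; this is an illustrative reading of the construction above, with the bending force then taken proportional to these angles.

```python
import numpy as np

def bending_angles(vertices):
    """Angle between successive edges of a hair-core polyline, one value
    per interior vertex. A straight core gives zero everywhere; larger
    angles mean larger bending (and hence larger restoring force)."""
    v = np.asarray(vertices, dtype=float)
    e = v[1:] - v[:-1]                       # edge vectors
    e /= np.linalg.norm(e, axis=1)[:, None]  # unit edge directions
    cosines = np.clip(np.sum(e[:-1] * e[1:], axis=1), -1.0, 1.0)
    return np.arccos(cosines)
```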

As the Young modulus of the hair core is unknown and impossible to evaluate explicitly, it is defined empirically so as to obtain satisfying visual results.

Figure 8: Animating hairs.


Once the locations of the basic hair cores have been computed using the midpoint method, the hair cores are interpolated over the support surface according to the hair density. A basic hair core is associated with each vertex of the support. The hair is recursively interpolated by levels, starting from the basic cores as level 0. As shown in figure 9, each triangle of the support is subdivided into 4 triangles by splitting each edge, then joining the 3 inserted points. Each vertex created by edge splitting is the root of an interpolated hair core, whose control points are interpolated from the control points of the 4 hairs associated with the four vertices of the two triangles sharing the split edge. The support surface is not actually subdivided; only the subdivided vertices that will be used as hair roots are added.
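The four-core blend for a split edge can be sketched as a weighted average of control-point arrays. The weights below are illustrative (chosen so they sum to one, in the spirit of subdivision-surface edge rules); the paper does not give the exact interpolation weights.

```python
import numpy as np

def interpolate_core(c_a, c_b, c_c, c_d, w_edge=0.375, w_opp=0.125):
    """Control points of an interpolated hair core rooted at the midpoint
    of split edge (a-b). c_a..c_d are (K, 3) control-point arrays of the
    four cores: a, b on the split edge, c, d at the opposite vertices of
    the two triangles sharing it. Weights are illustrative, not from the
    paper; they sum to 1 so equal inputs reproduce themselves."""
    return (w_edge * (np.asarray(c_a) + np.asarray(c_b))
            + w_opp * (np.asarray(c_c) + np.asarray(c_d)))
```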

Figure 9: Hair's interpolation mechanism. From the head's vertices is computed the first level
which is used to compute the second one. The interpolation of the hair indicated with the arrow
is computed using control points of the 4 represented head's vertices.
Figure 10: Interpolating curled hair.

Collision detection
In order to properly achieve hair animation, the model must include a collision detection and
treatment process. Our model takes into account the two types of collision involved:

a- Collisions between hair strands: the effect of these collisions is to give volume to the hair. Instead of detecting collisions explicitly, a repulsion force is introduced when two strands come closer than a proximity factor; that factor depends on hair density and shape. Friction forces are also introduced, to keep colliding strands from slipping against each other and to make them attract each other. Collision treatment is restricted to neighboring hair cores.
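A minimal sketch of the proximity-based repulsion between two hair-core vertices, with an assumed linear force profile (the paper does not specify the exact falloff, and the stiffness constant is illustrative):

```python
import numpy as np

def strand_interaction_force(p_i, p_j, proximity, k_rep=1.0):
    """Repulsion force on core vertex p_i due to nearby core vertex p_j.
    Zero beyond the proximity factor; inside it, the force pushes the
    strands apart and grows linearly as they approach. k_rep and the
    linear profile are illustrative choices."""
    d = np.asarray(p_i, float) - np.asarray(p_j, float)
    dist = np.linalg.norm(d)
    if dist >= proximity or dist == 0.0:
        return np.zeros(3)
    return k_rep * (proximity - dist) * (d / dist)
```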

b- Collisions between hair and support: the goal is to prevent hair strands from penetrating the support object. Collision treatment is again restricted to the hair cores. A path corresponding to the points of the support object nearest to a strand's vertices is tracked on the object; the method is efficient when the shape is not too irregular. Friction and repulsion forces can then be introduced into the physical simulation.

Hair rendering
In the field of human simulation, hair presents one of the most challenging problems. The
difficulties of processing hair result from the large number and detailed geometries of the
individual hairs, the complex interaction of light and shadow among the hairs, and the small
scale of one hair's width in comparison to the rendered image.

The rendering of hair therefore constitutes a considerable anti-aliasing problem in which many
individual hairs, reflecting light and casting shadows on each other, contribute to the shading of
each pixel. Several researchers have published [H2][H6][H7][H3][H12] methods for rendering
human hair, or the more limited problem of rendering fur.
Rendering an image of hair with our system involves several steps:

* creating a database of hair segments


* creating shadow buffers from all lights

* rendering the hairless objects using all shadow buffers

* composing the hair on the hairless image


The simulation module produces a database of hair segments for each frame, which is used as input to the rendering module.
In our system, hair rendering is done by raytracing, using a modified version of the public-domain Rayshade program. A module implementing the shadow buffer algorithm [H8][H9][H13] has been added to Rayshade, based on an earlier hair rendering approach using pixel blending [H5]. It works well for calculating the shadows of normal objects and is also used to compute the shadows cast by hair.
In the hair rendering module, the process proceeds step by step. First, the shadow of the scene is calculated for each light source, as are the hair shadows for each light source; the hair shadows are calculated for the object surface and individually for each hair. Finally, the hair style is blended into the scene using all the shadow buffers. The result is an image with a realistic three-dimensional hair style, in which complex shadow interactions and highlight effects can be seen and appreciated. In more detail, the full rendering pipeline may be summarized as follows:
a- We take the scene model description and project it onto each light source, creating one scene
shadow buffer for each light source.
b- We take the hair model and project it onto each light source, creating one hair shadow buffer
for each light source. This is done by drawing each hair segment into a Z-buffer based frame
buffer and extracting the resulting depth map.
c- We compose the depth maps for the scene shadow buffer and the hair shadow buffer, resulting
in a single composite shadow buffer for each light source.
d- We generate the scene image and its Z-buffer, using the scene model description and the
composite shadow buffers as input to the scene renderer, resulting in a fully rendered scene with
hair shadows, but no hair.
e- We blend the hair segments into the scene image, using the scene's Z-buffer to determine visibility and the composite shadow buffers to determine shadowing, yielding the final image with hair and full shadows. For this blending process, each strand of the hair model is broken into straight 3D line segments, and the intensity H of each segment endpoint is determined using a hair intensity equation.
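The compositing rule in step (c), and the resulting shadow test, can be sketched as per-pixel depth operations. This is a generic shadow-buffer formulation under the usual convention that each buffer stores the depth of the surface closest to the light; the bias constant is an assumption, not a value from the paper.

```python
import numpy as np

def composite_shadow_buffer(scene_depth, hair_depth):
    """Step (c): merge the scene and hair depth maps seen from one light
    into a single shadow buffer by keeping, per pixel, whichever depth
    is closer to the light."""
    return np.minimum(scene_depth, hair_depth)

def in_shadow(depth_from_light, shadow_buffer, ix, iy, bias=1e-3):
    """A point is shadowed if something (scene or hair) sits closer to
    the light than the point itself; bias avoids self-shadowing acne."""
    return depth_from_light > shadow_buffer[iy, ix] + bias
```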
The raytracer basically has two roles in the hair rendering pipeline. The first is to construct
shadow buffers from light sources, and the second is to render hairless objects with full
shadowing (coming from the hair and the hairless objects).
There are in fact two ways of creating shadow buffers. The first is to render the objects with the graphics hardware and read the contents of the z-buffer. The second is to use the raytracer to render the scene from the position of the light, asking it to produce a z-buffer image of the scene. After the shadow buffers have been created, the hairless objects have been rendered with full shadowing, and a z-buffer from the camera position has been calculated, the hairs may be added to the scene.

Figure 11: From top left to bottom right, the results of the steps involved in the rendering process. For the shadow buffers of each light source: first, computation of the shadow buffer for the hairless scene; then computation of the shadow buffer for the hair; and finally, composition of both shadow buffers. For the rendering: first, computation of the rendered hairless scene with hair shadows; then computation of the rendered hair; then blending of the hair over the hairless scene.

III - Natural Human Skin with Detailed Structures


The modeling and rendering of skin is very important for the realism of human animation, because skin is the outer layer of the whole body. The human eye is sensitive to skin, especially on the face and in regions with special surface features. Using conventional rendering techniques such as continuous polygon shading, the human character appears plastic and cartoon-like, with a too-smooth surface. Texture mapping is the most popular technique for obtaining a better rendered image. Texture information may be obtained with a laser scanner or from photographs [S1][S2]. Efforts have been made to manipulate a 3D model of a face in conjunction with texture mapping and coloration, producing a realistic facial image with surface details, skin grain and other skin features [S3]. However, this information carries the surface details of the person whose photograph was used. The rendered image inherits side effects of the original photograph, such as its lighting conditions, which may no longer match the scene. Furthermore, it cannot be adapted to the same human model at a different age.
Some efforts have been made to model static skin surface details [S4][S5]. There are also a few dynamic models that try to simulate details such as wrinkles during skin deformation [S6][S7][S8]. However, existing models do not generate a skin surface that combines static and deformation details.
This prompted us to study real skin structure closely and to model synthetic skin with its detailed structures, together with animation. Our model considers the micro and macro structures of the skin surface geometry, and simulates wrinkle dynamics using a biomechanical skin model. Bump texture mapping is used for skin rendering.

Skin Properties
Skin has a three-layered structure: the epidermis, the dermis and the hypodermis. The skin surface has a geometrical structure that varies over different parts of the body due to the different combined effects of these layers. However, a close look at the skin surface shows that two kinds of structure basically influence its general appearance: a micro structure, with a well-defined geometrical shape and form similar over most parts of the body, and a macro structure consisting of the distinct visible lines, patterns, creases, wrinkles (expressive wrinkles and wrinkles due to age) and folds specific to one part. Both structures change with time: the micro structure becomes rough with age, the distinctive lines appear deeper or more pronounced, and the expressive wrinkles that were once visible only during facial animation remain on the face permanently.
These skin properties suggest building a skin surface model that considers the micro and macro structures and takes into account their changes with region and age.

General Human Skin with Micro Structure


Micro lines appear as a simple basic pattern in most skin regions, though they have various patterns and preferred directions at different body sites. The micro structure of the skin surface resembles a layered, net-like pattern. This structure consists of polygonal forms, most often triangles [S9]. The edges of the triangular forms define the locations of the furrows, and the curved surfaces surrounded by furrows define the ridges.
We use Delaunay triangulation [S10] to generate the furrow pattern for each skin region. A variety of triangle meshes can be produced by imposing different edge-size or angle constraints, which makes it possible to produce the corresponding micro pattern for each skin region. A ridge surface is defined as a function over the triangle base, whose height increases from the edges toward the center of the triangle. By changing this shape function, sharper or flatter ridges can be obtained. This characteristic allows us to model the skin of different people, and of people of different ages.

To obtain a hierarchical structure (small ridges inside a larger ridge), the Delaunay triangulation process is repeated down to the required level, and the shape function is applied recursively [S11].

To simulate the change of skin micro structure with age, a shape function producing a sharper bulge can be applied to the same triangular mesh of the same model, yielding a rougher skin surface than that of young skin.

Figure 12 shows a synthetic skin texture image produced for facial skin rendering. It is a two-layered structure whose shape function value is the square root of the product of the three barycentric coordinates of the triangle. Figure 13 shows a synthetic actress's face rendered with bump mapping using the synthetic skin images. The same texture image is used for both the facial skin and the lips, but with different mapping parameters, which makes the lip surface appear rougher than the rest of the facial skin.
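The shape function named above is simple enough to write down directly. This sketch evaluates the ridge height at a point given its barycentric coordinates (u, v, w) in a furrow triangle; the `scale` factor is an illustrative addition for controlling ridge sharpness.

```python
import numpy as np

def ridge_height(u, v, w, scale=1.0):
    """Ridge shape function for the two-layered skin image: height
    proportional to the square root of the product of the three
    barycentric coordinates. It is zero on the furrows (triangle
    edges, where one coordinate vanishes) and maximal at the center."""
    return scale * np.sqrt(u * v * w)
```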

Figure 12: Two layered synthetic skin image.

Figure 13: An actress's face with bump texture mapping.

Human Skin at Special Regions with Macro Structure


The macro structure characterizes the distinctive features of a body region. In addition to the micro furrows all over the body, macro furrows such as palm lines and the potential flexure lines of wrinkles exist in some skin regions. The macro lines are deeper and wider than the micro lines.
In order to define a macro structure for a specific skin region, an array of points on the 3D skin surface is defined interactively to locate the position of a distinctive macro furrow. A B-spline curve is then obtained by interpolation or approximation through the defined points. This 3D curve is mapped to the 2D texture image using a cylindrical or planar projection.
In the 2D texture image, the macro structure is represented as a quadrilateral mesh along the furrow line, with a defined band size. Along the bulge direction, a neighboring relation is built among the points, and the bulge shape function is applied to the distance between a point and the center point of its neighboring set. Various bulge functions are defined according to the macro shape features. In addition, different types of band size, such as uniform, random, increasing and decreasing, can be specified for the macro lines. The macro structure model thus offers a mechanism for representing the geometric shape of the static lines in skin, as well as expressive wrinkles [S11].

Here we present the process of modeling the palm skin. First, the upper transverse line, the lower transverse line, the thenar line and the 14 finger creases are defined interactively on a 3D hand model. The bulge shape function, the band size type and its value are specified for each line. The first three palm lines are given a decreasing band size type, as their line width decreases from one end to the other; the finger creases are given a uniform band size type. In addition, a cluster of fine lines is used to suggest the papillary ridges on the palm, which are very shallow lines. All lines are projected onto the 2D texture image.
Figure 14 shows a synthetic palm image. With the bump mapping technique, a more realistic
hand image is obtained (Figure 15).

Figure 14: Palm image with feature lines.


Figure 15: Palm with bump mapping rendering.

Wrinkle Generation
In our model, we consider the process of expressive wrinkle generation in addition to the modeling of the static skin surface structure. First, potential wrinkle lines are defined, before skin deformation, as curves transverse to the muscle fibers. For skin deformation, we use a biomechanical model that considers the skin as a biaxial, incompressible material with plane-stress elastic behavior. During the animation, the wrinkle generation model obtains the principal strain of the deformed skin surface from the biomechanical model. The principal strain is then used to dynamically control the shape and form of the expressive wrinkles in a texture image. The final rendered facial expression sequence with wrinkles is achieved by bump texture mapping.
In this section, we have presented a skin surface model with micro and macro structures that can be used to enhance the natural look of skin. The skin geometric model is flexible, changing with region and age, and the deformation model supplies the information needed for wrinkle generation.

Acknowledgements
We would like to thank Karin Blanch, Mehdi Davary, Sylvie Holowaty and Jean-Claude
Moussaly for their help in producing the pictures. We would like to thank the Swiss National
Research Foundation for their constant support that makes this fundamental interdisciplinary
work possible.

Bibliography
References for the cloth section
[C1] Aono M. (1990), "A Wrinkle Propagation Model for Cloth", Proc. Computer Graphics
International, Springer-Verlag, 1990, pp.96-115.
[C2] Baraff D. (1990), "Curved Surfaces and Coherence for Non-Penetrating Rigid Body
Simulation", Computer Graphics, 24(4), pp 19-28.
[C3] Baraff D., Witkin A. (1992), "Dynamic Simulation of Non-Penetrating Flexible Bodies",
Computer Graphics, 26(2), pp 303-308.

[C4] Breen D.E., House D.H., Wozny M.J. (1994) "Predicting the Drape of Woven Cloth using
Interacting Particles", Proc. SIGGRAPH'94, Computer Graphics, 28(4), pp 365-372.
[C5] Canny J.F., Manocha D. (1991), "A new approach for Surface Intersection", International
journal of Computational Geometry and Applications, 1(4), pp 491-516.
[C6] Carignan M., Yang Y., Magnenat Thalmann N., Thalmann D. (1992) "Dressing Animated
Synthetic Actors with Complex Deformable Clothes", Proc. SIGGRAPH'92, Computer
Graphics, 26(2), pp 99-104.
[C7] Denby E.F. (1976), "The Deformation of Fabrics during Wrinkling - A Theoretical
Approach", Textile Research Journal, Lancaster PA., 46, pp 667-670.

[C8] Duff T. (1992), "Interval Arithmetic and Recursive Subdivision for Implicit Functions and
Constructive Solid Geometry", Computer Graphics 26(2), pp 131-138.
[C9] Hinds B.K., McCartney J. (1990), "Interactive garment design", The Visual Computer, Vol.
6, pp.53-61.
[C10] Kunii T.L., Gotoda H. (1990) "Modeling and Animation of Garment Wrinkle Formation
processes", Proc. Computer Animation `90, Springer-Verlag, pp 131-146.

[C11] Lafleur B., Thalmann N.M., Thalmann D. (1991), "Cloth Animation with Self-Collision
Detection", Proc. of the IFIP conference on Modeling in Computer Graphics, pp 179-187.
[C12] Lin M.C., Manocha D. (1993), "Interference Detection between Curved Objects for
Computer Animation", Models and techniques in Computer Animation (C.A. proceedings 1993),
pp 43-55.

[C13] Magnenat Thalmann N., Yang Y. (1991) "Techniques for Cloth Animation", New Trends
in Animation and Visualization, edited by N. Magnenat Thalmann, D. Thalmann, John Wiley & Sons
Ltd., pp. 243-256.

[C14] Moore M., Wilhelms J. (1988), "Collision Detection and Response for Computer
Animation", Computer Graphics, 22(4), pp 289-298.
[C15] Morton W.E., Hearle J.W.S. (1962) "Physical properties of textile fibers", Manchester and
London, The textile institute, Butterworths.
[C16] Shinya M., Forgue M.C. (1991), "Interference Detection through Rasterisation", The
Journal of Visualisation and Computer Animation, 4(2), pp 132-134.

[C17] Snyder J.M., Woodbury A.R., Fleisher K., Currin B., Barr A.H. (1993), "Interval Methods
for Multi-Point Collisions between Time-Dependent Curved Surfaces", Computer Graphics
annual series, pp 321-334.

[C18] Von Herzen B., Barr A.H., Zatz H.R. (1990), "Geometric Collisions for Time-Dependent
Parametric Surfaces", Computer Graphics, 24(4), pp 39-48.
[C19] Volino P., Magnenat Thalmann N. (1994), "Efficient Self-Collision Detection on
Smoothly Discretised Surface Animations using Geometrical Shape Regularity", Computer
Graphics Forum (EuroGraphics Proc.), 13(3), pp 155-166.

[C20] Volino P., Courchesne M., Magnenat Thalmann N. (1995), "Versatile and Efficient
Techniques for Simulating Cloth and Other Deformable Objects", Proc. SIGGRAPH '95, Los
Angeles, pp. 137-144.

[C21] Volino P., Magnenat Thalmann N. (1995), "Collision and Self-Collision Detection:
Efficient and Robust Solutions for Highly Deformable Surfaces", Eurographics Workshop on
Animation and Simulation (to appear).
[C22] Webb R.C., Gigante M.A. (1992), "Using Dynamic Bounding Volume Hierarchies to
improve Efficiency of Rigid Body Simulations", Communicating with Virtual Worlds, (CGI
proceedings 1992), pp 825-841.
[C23] Weil J. (1986), "The Synthesis of Cloth Objects", Proc. SIGGRAPH'86, Computer
Graphics, 20(4), pp 49-54.

[C24] Werner H.M., Magnenat Thalmann N., Thalmann D. (1993) "User Interface for Fashion
Design", Graphics Design and Visualisation, IFIP Trans. North Holland, pp 197-204.
[C25] Witkin A., Welch W. (1990), "Fast Animation and Control of Non-Rigid Structures",
Proc. SIGGRAPH'90, Computer Graphics, 24, pp 243-252.
[C26] Yang Y., Magnenat Thalmann N. (1993), "An Improved Algorithm for Collision
Detection in Cloth Animation with Human Body", Computer Graphics and Applications (Pacific
Graphics proceedings), 1, pp 237-251.
[C27] Yang Y., Magnenat Thalmann N., Thalmann D. (1992) "3D Garment Design and
Animation-- A New Design Tool For The Garment Industry", Computers in Industry, 19,
pp.185-191.
[C28] Zyda M. , Pratt D., Osborne W., Monahan J. (1993), "Real-Time Collision Detection and
Response", The Journal of Visualisation and Computer Animation, 4(1), pp 13-24.

References for the hair section


[H1] Anjyo K., Usami Y., Kurihara T. (1992) "A Simple Method for Extracting the Natural
Beauty of Hair", Computer Graphics, Vol. 26(2), pp. 111-120.

[H2] Csuri C., Hackathorn R., Parent R., Carlson W., Howard M. (1979) "Towards an interactive
high visual complexity animation system", Computer Graphics 13(2) pp. 289-299.

[H3] Kajiya J.T., Kay T.L. (1989) "Rendering Fur with Three Dimensional Textures", Computer
Graphics Vol. 23, No. 3, pp. 271-280.
[H4] Kurihara T., Anjyo K., Thalmann D. (1993) "Hair Animation with Collision Detection", in:
Models and Techniques in Computer Animation, Springer-Verlag, Tokyo, pp.128-138.
[H5] LeBlanc A., Turner R., Thalmann D. (1991) "Rendering Hair using Pixel Blending and
Shadow Buffers", Journal of Visualization and Computer Animation 2(3), pp. 92-97

[H6] Miller G.S.P. (1988) "From Wire-Frame to Furry Animals", Proc. Graphics Interface, pp.
138-146.
[H7] Perlin K., Hoffert E.M. (1989) "Hypertexture", Computer Graphics 23(3), pp. 253-262.
[H8] Reeves W.T., Blau R. (1985) "Approximate and Probabilistic Algorithms for Shading and
Rendering Structured Particle Systems", Computer Graphics 19(3), pp. 313-322.
[H9] Reeves W.T., Salesin D.H., Cook R.L. (1987) "Rendering Antialiased Shadows with Depth
Maps", Computer Graphics, 21(4), pp. 283-291.

[H10] Rosenblum R.E., Carlson W.E., Tripp III E. (1991) "Simulating the Structure and
Dynamics of Human Hair: Modelling, Rendering and Animation", The Journal of Visualization
and Computer Animation, Vol. 2, No. 4, pp. 141-148.
[H11] Shinya M., Forgue M.C. (1991) "Interference Detection through Rasterisation", Journal of
Visualisation and Computer Animation, J. Wiley & Sons, 4(2), pp 132-134.

[H12] Watanabe Y., Suenaga Y. (1989) "Drawing Human Hair Using Wisp Model", Proc.
Computer Graphics International '89, pp. 691-700.
[H13] Williams L. (1978) "Casting Curved Shadows on Curved Surfaces", Computer Graphics,
12(3), pp. 270-274.

References for the skin section


[S1] Kurihara T. and Arai K. (1991), "A Transformation method for Modeling and Animation of
the Human Face from Photographs," In Ed. N. Magnenat Thalmann and D. Thalmann, Computer
Animation'91, pp. 45-58, Springer-Verlag.
[S2] Williams L. (1990), "Performance-Driven Facial Animation", Computer Graphics, Vol.
24(4), pp. 235-242.
[S3] Kalra P. and Magnenat-Thalmann N. (1993), "Simulation of Facial Skin using Texture
Mapping and Coloration", Ed. S. P. Mudur and S. N. Pattanaik, Proceedings of ICCG'93,
Bombay, in Graphics, Design and Visualization pp. 365-374, North-Holland.
[S4] Nahas M., Huitric H., Rioux M. and Domey J. (1990), "Facial Image synthesis using skin
texture recording", The Visual Computer, 6, pp. 337-343.
[S5] Ishii T., Yasuda T., Yokoi S. and Toriwaki J. (1993), "A Generation Model for Human Skin
Texture", Proc. of CGI '93, pp. 139-150.
[S6] Viaud M., Yahia H. (1992), "Facial Animation with Wrinkles", 3rd Workshop on
Animation, Eurographics'92, Cambridge.

[S7] Terzopoulos D., Waters K. (1990), "Physically-Based Facial Modeling and Animation",
Journal of Visualization and Computer Animation, Vol. 1, pp. 73-80.
[S8] Wu Y., Magnenat Thalmann N. and Thalmann D. (1994), "A Plastic-Visco-Elastic Model
for Wrinkles In Facial Animation And Skin Aging", Proc. Pacific Conference '94, pp. 201-213.
[S9] Marks R. (1983), "Mechanical Properties of the Skin", in Ed. Goldsmith L, Biochemistry
and Physiology of the Skin, Oxford University Press.

[S10] Farin, G. (1990), "Curves and Surfaces for Computer Aided Geometric Design, A Practical
Guide", Academic Press, Second Edition.
[S11] Wu Y., Kalra P. and Magnenat-Thalmann N. (1996), "Simulation of Static and Dynamic
Wrinkles of Skin," Proceedings of Computer Animation'96 (to appear).
[1] Published in Proc. Computer Graphics International '96, IEEE Computer Society, pp. 132-
141.
