howThingsWork Sig10 PDF
Niloy J. Mitra (1,2)   Yong-Liang Yang (1)   Dong-Ming Yan (1,3)   Wilmot Li (4)   Maneesh Agrawala (5)
(1) KAUST   (2) IIT Delhi   (3) Univ. of Hong Kong   (4) Adobe Systems   (5) Univ. of California, Berkeley
Figure 1: Given a geometric model of a mechanical assembly, we analyze it to infer how the individual parts move and interact with each
other. The relations and motion parameters are encoded as a time-varying interaction graph. Once the driver is indicated by the user, we
compute the motion of the assembly and use it to generate an annotated illustration to depict how the assembly works. We also produce a
corresponding causal chain sequence to help the viewer better mentally animate the motion.
things work visualizations that facilitate these two steps. Researchers in computer graphics have concentrated on refining and implementing many of the design guidelines aimed at assisting the first step of the comprehension process. Algorithms for creating exploded views [McGuffin et al. 2003; Bruckner and Gröller 2006; Li et al. 2008], cutaways [Seligmann and Feiner 1991; Li et al. 2007; Burns and Finkelstein 2008] and ghosted views [Feiner and Seligmann 1992; Viola et al. 2004] of complex objects apply illustrative conventions to emphasize the spatial locations of the parts with respect to one another. Since our focus in this work is on depicting motions and mechanical interactions between parts, we concentrate the following discussion on visual techniques that facilitate the second step of the comprehension process.

Helping Viewers Construct the Causal Chain

In an influential treatise examining how people predict the behavior of mechanical assemblies from static visualizations, Hegarty [1992] found that people reason in a step-by-step manner, starting from an initial driver part and tracing forward through each subsequent part along a causal chain of interactions. At each step, people infer how the relevant parts move with respect to one another and then determine the subsequent action(s) in the causal chain. Even though all parts may be moving at once in real-world operation of the assembly, people mentally animate the motions of parts one at a time in causal order.

Although animation might seem like a natural approach for visualizing mechanical motions, in a meta-analysis of previous studies comparing animations to informationally equivalent sequences of static visualizations, Tversky et al. [2002] found no benefit for animation. Our work does not seek to engage in this debate between static versus animated illustrations. Instead we aim to support both types of visualizations with our tools. We consider both static and animated visualizations in our analysis of design guidelines.

Effective how things work illustrations use a number of visual techniques to help viewers mentally animate an assembly.

Use arrows to indicate motions of parts. Many illustrations include arrows that indicate how each part in the assembly moves. In addition to conveying the motion of individual parts, such arrows can also help viewers understand the specific functional relationships between parts [Hegarty 2000; Heiser and Tversky 2006]. Placing the arrows near contact points between parts that interact along the causal chain can help viewers better understand the causal relationships.

Highlight causal chain step-by-step. In both static and animated illustrations, highlighting each step in the causal chain of actions helps viewers mentally animate the assembly by explicitly indicating the sequence of interactions between parts. Static illustrations often depict the causal chain using a sequence of key frames that correspond to the sequence of steps in the chain. Each key frame highlights the transfer of movement between a set of touching parts, typically by rendering those parts in a different style from the rest of the assembly. In the context of animated visualizations, researchers have shown that adding signaling cues that sequentially highlight the steps of the causal chain improves comprehension compared to animations that do not include such cues [Hegarty et al. 2003; Kriz and Hegarty 2007].

Figure 3: Typical joints and gear configurations encountered in mechanical assemblies, and handled in our system. While we can automatically detect and handle most types, we require the user to mark some parts, for example a lever (see accompanying video).

Highlight important key frames of motion. The motions of most parts in mechanical assemblies are periodic. However, in some of these motions, the angular or linear velocity of a part may change during a single period. For example, the pistons in the assembly shown in Figure 12 move up and down the cylinder during a single period of motion. To depict such complex motions, static illustrations sometimes include key frames that show the configuration of parts at the critical instances in time when the angular or linear velocity of a part changes. Inserting one additional key frame between each pair of critical instances can help clarify how the parts move from one critical instance to the next.

4 System Overview

We present an automated system for generating how things work visualizations that incorporate the visual techniques described in the previous section. The input to our system is a polygonal model of a mechanical assembly that has been partitioned into individual parts. Our system deletes hanging edges and vertices as necessary to make each part 2-manifold. We assume that touching parts are modeled correctly, with no self-intersections beyond a small tolerance. As a first step, we perform an automated motion and interaction analysis of the model geometry to determine the relevant motion parameters of each part, as well as the causal chain of interactions between parts. This step requires the user to specify the driver part for the assembly and the direction in which the driver moves. Using the results of the analysis, our system allows users to generate a variety of static and animated visualizations of the input assembly from any viewpoint. The next two sections present our analysis and visualization algorithms in detail.

5 Motion and Interaction Analysis

The analysis phase computes the type of each part and how the parts move and interact with each other within the assembly. We encode this information as an interaction graph (see Figure 4) in which nodes represent parts, and edges represent mechanical interactions between touching parts. Each node stores parameters that define the type (e.g., axle, gear, rack) and motion attributes (e.g., axis of rotation or translation, range of possible motion) of the corresponding part. We use shape and symmetry information to infer both the part type and part motion directly from the 3D model. Each edge in the graph is also typed to indicate the kind of mechanical interaction between the relevant parts. Our system handles all of the common joint and gear configurations shown in Figure 3, including cam and crank mechanisms, axles, belts, and a variety of gear interactions.
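As a concrete reading of this encoding, a minimal interaction-graph data structure can be sketched as follows. This is our own illustrative sketch in Python; the class names, fields, and joint-type strings are assumptions, not the authors' implementation:

```python
# Sketch of the interaction graph described above: nodes are parts with
# a type and motion attributes; edges are typed mechanical interactions
# between touching parts. All names here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PartNode:
    name: str
    part_type: str                    # e.g., "axle", "gear", "rack"
    axis: tuple = (0.0, 0.0, 1.0)     # axis of rotation or translation
    motion_range: tuple = None        # optional range of possible motion

@dataclass
class InteractionGraph:
    nodes: dict = field(default_factory=dict)   # name -> PartNode
    edges: dict = field(default_factory=dict)   # (name_a, name_b) -> joint type

    def add_part(self, node):
        self.nodes[node.name] = node

    def connect(self, a, b, joint_type):
        # Store edges under a canonical key so (a, b) and (b, a) coincide.
        self.edges[tuple(sorted((a, b)))] = joint_type

    def neighbors(self, name):
        # All parts sharing a typed edge with the given part.
        return [other for pair in self.edges for other in pair
                if name in pair and other != name]
```

Keying edges on a sorted pair reflects that a mechanical contact is symmetric; the per-edge joint type is what the interaction analysis of Section 5.2 later fills in.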
Figure 4: (Left) Parts of an assembly with automatically detected axes of rotation or symmetry, and (right) the corresponding interaction graph. Graph edges encode the types of joints.

Figure 5: Parts are segmented using edges with high dihedral angles across adjacent faces to form sharp edge loops.

To construct the interaction graph, we rely on two high-level insights: 1) the motions of many mechanical parts are related to their geometric properties, including self-similarity and symmetry; 2) the different types of joint and gear configurations are often characterized by the specific spatial relationships and geometric attributes of the relevant parts. Based on these insights, we propose a two-stage process for constructing the interaction graph. First, in the part analysis stage, we examine each part and compute a set of candidate rotation and translation axes based on the self-similarity and symmetry of the part. We also compute key geometric attributes, including radius, type of side profile (cylindrical, conical), pitch, and teeth count, as applicable. Complex parts are segmented into simpler subparts, each with its associated radius, side profile type, etc. Then, in the interaction analysis stage, we analyze the axes and attributes of each pair of touching parts and classify the type of joint or gear configuration between those parts. Based on this classification, we create the interaction graph nodes, store any relevant motion parameters, and link touching parts with the appropriate type of edge.

5.1 Part Analysis

For each part, we estimate its (potential) axes, along with attributes like side profile, radius, teeth count, and pitch, as applicable. Based on these attributes we classify the type of the part. In Section 5.2, we explain how we use these attributes to determine how motion is transmitted across parts in contact with one another. In this interaction analysis phase we also use the motion information to refine our classification of the part type.

Assumptions. Inferring inter-relations between parts and their motion directly from the geometry of a static configuration of a mechanical assembly can be ambiguous. However, making simple assumptions about the assembly allows us to semi-automatically rig up its motion. We assume that parts usually rotate about a symmetry or dominant axis, translate along translational symmetry directions, or engage in screw motion along a helical axis, as applicable. Unsymmetric parts are assumed to remain fixed. Such parts typically constitute the support structures of mechanical assemblies. Levers, belts and chains have few dominating geometric characteristics. Thus, we expect the user to identify such parts. Our system au-

which are present in certain parts like variable-speed gears (see Figure 3). For rotationally symmetric parts, we obtain their rotational axis a and teeth count based on the order of their symmetry. Similarly, for parts with discrete translational symmetry, we obtain their movement direction t and teeth width, e.g., for a gear rack; while for helical parts we compute the axis direction t along with the pitch of the screw motion.

Sharp edge loops. Parts with insignificant symmetry often provide important constraints for the other parts, facilitating or restricting their movements. Even for (partially) symmetric parts we must estimate attributes like gear radius, range of motion, etc. We extract such properties by working with sharp edge loops, or creases, which are common 1D feature curves characterizing machine parts (see also [Gal et al. 2009]).

We employ a simple strategy to extract sharp edge loops from polygonal models of mechanical parts. First, all edges whose dihedral angle between adjacent faces exceeds a threshold θs (40° in our implementation) are marked as sharp. Then, starting from a random seed triangle as a segment, we greedily gather neighboring triangles into the segment using a floodfill algorithm, while making sure not to cross any sharp edge. We repeat this segmentation process until all triangles have been partitioned into segments separated by sharp edges. Segments with only a few sharp edges (fewer than 10 in our experiments) are discarded as insignificant. Boundary loops of the remaining segments are used as sharp edge feature loops for the remaining analysis (see Figure 5). Note that an edge can belong to up to two loops. This simple loop extraction procedure is sufficient to handle clean input geometry of mechanical parts with sharp features, as in CAD models. However, for parts with few sharp edges, boundaries of proxy segments computed using variational shape approximation can be used as feature loops [Cohen-Steiner et al. 2004; Mehra et al. 2009].

Next we fit least-squares (parts of) circles to the sharp edge loops and, based on the residual error, identify circular loops. For each circle loop li, besides its center ci and radius ri, we also obtain its (canonical) axis direction ai as the normal to its containing plane.
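The least-squares circle fit used on the edge loops can be sketched as below. This is a hedged 2D sketch, assuming the loop points have already been projected into their best-fit plane (whose normal supplies the canonical axis ai); the function name and the Kåsa formulation are our choices, not necessarily the paper's:

```python
import math

def fit_circle_2d(pts):
    """Least-squares (Kasa) circle fit to 2D points (x, y).
    Solves x^2 + y^2 = 2*a*x + 2*b*y + c for (a, b, c);
    the center is (a, b) and the radius is sqrt(c + a^2 + b^2)."""
    # Accumulate the normal equations A^T A m = A^T rhs for 3 unknowns.
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in pts:
        row = (2 * x, 2 * y, 1.0)
        rhs = x * x + y * y
        for i in range(3):
            atb[i] += row[i] * rhs
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    # Solve the 3x3 system by Gauss-Jordan elimination with pivoting.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                for k in range(col, 4):
                    m[r][k] -= f * m[col][k]
    a, b, c = (m[i][3] / m[i][i] for i in range(3))
    return (a, b), math.sqrt(c + a * a + b * b)
```

The residual of this fit is what separates genuinely circular loops from other feature loops; loops with large residual would simply be rejected.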
Figure 7: A time-varying interaction graph. (Top) Graph edges for the rack-and-pinion configuration are active at different times during the motion cycle. (Bottom) We use a space-time construction to find the time interval for each edge. When the contact surfaces separate or self-intersect, we identify the end of a cycle. We show the space-time surface analysis for the top-left configuration.
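The event-based bookkeeping suggested by Figure 7 can be sketched as a time-stepping loop: advance the assembly in fixed increments and record the interval over which two parts stay within the contact threshold. This is a toy abstraction; `positions_at` and `min_distance` stand in for the real kinematics and proximity queries, and all names are ours, not the paper's:

```python
# Sketch of the space-time contact test: step forward in fixed
# increments dt and return the interval over which two parts remain
# in contact (closest distance below alpha). Illustrative only.

def contact_interval(part_a, part_b, positions_at, min_distance,
                     t0=0.0, dt=0.1, alpha=0.001, t_max=10.0):
    """Return (t_start, t_end) over which the two parts remain within
    the contact threshold alpha, starting from a contact at t0."""
    t = t0
    while t + dt <= t_max:
        pa = positions_at(part_a, t + dt)
        pb = positions_at(part_b, t + dt)
        if min_distance(pa, pb) > alpha:   # surfaces separated: event time
            break
        t += dt
    return (t0, t)
```

At the returned end time a new contact search would begin, which matches the paper's assumption that junction contacts change discretely over the motion cycle.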
For each part, all its estimated circle loops {li} are clustered to extract its potential axes of rotation. First we group loops {li} with similar axis direction, i.e., we cluster the axes {ai} in the line space parameterized by their altitude-azimuthal angles. For each detected group, we further partition its elements based on the projection of their centers in a direction along the cluster representative axis, chosen as the centroid of the cluster in the line space. We use a radius of 5 degrees for clustering axes.

This simple grouping scheme works well because mechanical models are largely designed from canonical geometric primitives, and the underlying axes of their circle loops are well aligned. We consider the clustered directions to be potential axes of a part if their cluster sizes are significant, and rank the directions based on the number of elements in their corresponding clusters (see Figure 6). These axis directions, both for moving and for static parts, often contain important alignment cues for neighboring parts, e.g., as potential shaft locations for axles.

Cylindrical or conical parts have similar rotational symmetry and similar sharp edge loop clusters. To differentiate between them we consider their (approximate) side profiles. We partition rotationally symmetric parts into cap and side regions. Let ai denote the rotational axis of part Pi. Each face tj ∈ Pi is classified as a cap face if its normal nj is such that |nj · ai| ≈ 1; otherwise it is marked as a side face. We then build connected components of faces with the same labels, and discard components that have only a few triangle faces as members (see Figure 8). Finally, we fit least-squares cylinders and cones to the side regions, using the rotational axis and respective loop radius to initialize the non-linear fitting. Detected cylindrical or conical side profiles are later used to classify joint types, e.g., cylinder-on-plane, cone-on-cone (bevel), etc.

Figure 8: Sharp edge loops are fitted with circles, and regular loops identified. The dominant part axis is used to partition the part into cap and side regions, which are then tested for cylinder or cone fits to determine region types.

We obtain the remaining relevant part attributes from the sharp edge loops. Their radii give us estimates for the inner and outer radii of parts. For parts with cylindrical, conical, or helical regions, we count the intersections of the sharp edge loops with their fitted circles to get the teeth count (see Figure 8). To rule out outlier parts like rectangular bars, we consider rotationally symmetric parts to be gears only if they exhibit n-fold symmetry with n > 4.

Finally, for levers, chains and belts, which are essentially 1D structures, we use local principal component analysis to determine the dominant direction of the part.

5.2 Interaction Analysis

To build the interaction graph and estimate its parameters, we proceed in three steps: we compute the topology of the interaction graph based on the contact relationships between parts; we then classify the type of interaction at each edge (i.e., the type of joint or gear configuration); finally, we compute the motion of each part in the assembly.

Contact detection. We use the contact relationships between parts to determine the topology of the interaction graph. Following the approach of Agrawala et al. [2003], we consider each pair of parts in the assembly and compute the closest distance between them. If this distance is less than a threshold α, we consider the parts to be in contact, and we add an edge between their corresponding nodes in the interaction graph. We set α to be 0.1% of the diagonal of the assembly bounding box in our experiments.

As assemblies move, their contact relationships evolve. Edges eij in the interaction graph may appear or disappear over time. To compute the evolution of the interaction graph we establish contact relations using a space-time construction. Suppose at time t, two parts Pi and Pj are in contact and we have identified their joint type (see later). Based on the joint type we estimate their relative
motion parameters and compute their positions at subsequent times t + ∆t, t + 2∆t, etc. We stack the surfaces in 4D using time as the fourth dimension and connect corresponding points across adjacent time slices.

Often, due to symmetry considerations, it is sufficient to work with 2D cross-sectional curves of parts and construct the space-time surface in 3D (see Figure 7). By extracting contact relations between the space-time surfaces (as an instance of local shape matching), we infer the time interval [t, t + n∆t] over which the parts Pi and Pj remain in contact, and hence the connection eij survives. The same method applies when the contact remains valid but the junction parameters change, e.g., a variable-speed gear (see Figure 3). In this case, the space-time surfaces become separated or start to intersect as the junction parameter changes. Afterwards, we look for new contacts at the new event time, and continue building the graph. Note that we implicitly assume that the junction part contacts and parameters change discretely over the motion cycle, allowing us to perform such an event-based estimation. Hence we cannot handle continuously evolving junctions that may occur for parts like elliptic gears. Assuming that the relative speeds between parts of an assembly are reasonable, we use a fixed sampling ∆t = 0.1 sec, with the default speed for the driver part set to an angular velocity of 0.1 radian/sec or a translational velocity of 0.1 unit/sec, as applicable.

Interaction classification. Having established the connectivity of the graph, we now determine the relevant attributes for each edge (i.e., we classify the corresponding junction and estimate the motion parameters for the relevant parts). We categorize a junction between nodes P1 and P2 using their relative spatial arrangement and the individual part attributes. Specifically, we classify junctions based on the relation between the part axes a1 and a2, and then check whether the part attributes agree at the junctions. For parts with multiple potential axes, we consider all pairs of axes.

Parallel axes: When the axes are nearly parallel, i.e., |a1 · a2| ≈ 1, the junction can be cylinder-on-cylinder (e.g., yellow and green gears in Figure 9) or cylinder-in-cylinder (e.g., yellow and blue gears in Figure 9). For the former, r1 + r2 (roughly) equals the distance between the axes, e.g., spur gears and helical gears; for the latter, |r1 − r2| (roughly) equals the distance between the axes, e.g., inner ring gears, as in planetary gears. Note that for cylinder-on-cylinder, the cylinders can rotate about their individual axes while simultaneously one cylinder rotates about the other, e.g., a subpart of a planetary configuration (see Figure 9).

Coaxial: As a special case of parallel axes, when both axes are also contained in a single line, the junction is marked coaxial. This junction type is commonly encountered in wheels, cams, cranks, axles, etc.

Orthogonal axes: When the axes are nearly orthogonal, i.e., a1 · a2 ≈ 0, the junction can be cylinder-on-plane, cylinder-on-line, rack-and-pinion, worm gear, bevel gear, or helical gear. For cylinder-on-plane, one part is cylindrical, with radius matching the distance between the cylinder axis and the plane. For cylinder-on-line, e.g., belts and pulley ropes, the principal direction of the 1D part is tangent to the cylinder. For a worm gear, one part is helical and the other cylindrical. If both parts are conical with their cone angles summing to 90 degrees, we mark a bevel gear. When one part exhibits rotational symmetry, the other translational symmetry, and their teeth widths match, we flag a rack-and-pinion arrangement.

Figure 9: Planetary gear arrangement. The same arrangement of mechanical parts can have largely different motions depending on the chosen driver and the part constraints. The blue gear can be free (left), or can be marked as fixed (right).

Our junction classification rules are carefully chosen and designed based on standard mechanical assemblies and can successfully categorize most joints automatically, as found in our extensive experimentation. However, our system also allows the user to intervene and rectify misclassifications, as shown in the supplementary video.

Compute motion. Mechanical assemblies are brought to life by an external force applied to a driver and propagated to the other parts according to junction types and part attributes. In our system, once the user indicates the driver, motion is transferred to the other connected parts through a breadth-first traversal of the interaction graph G, starting with the driver node as the root. We employ simple forward kinematics to compute the relative speed at any node based on the joint type with its parent [Davidson and Hunt 2004]. For example, for a cylinder-on-cylinder joint, if motion from a cylinder with radius r1 and angular velocity ω1 is transmitted to another with radius r2, then the imparted angular velocity ω2 is ω1 r1 / r2. Our approach handles graphs with loops (e.g., planetary gears). Since we assume our input models are consistent assemblies, even when multiple paths exist between the root node and another node, the final motion of the node does not depend on the graph traversal path.

When we have an additional constraint at a node, e.g., a node is fixed or restricted to translate only along an axis, we perform a constrained optimization to find a solution. For example, in Figure 9-right, when the green part is the driver and the blue part is fixed, the intermediate nodes rotate both about their individual axes and also about the green cylinder to satisfy the constraints on both their edges. Since we assume that the input assembly is a valid one and does not self-penetrate during its motion cycle, we do not perform any collision detection in our system. If the detected motion is wrong, the user can intervene and correct misclassifications. Such intervention was rarely needed for the large set of assemblies we tried.

6 Visualization

Using the computed interaction graph, our system automatically generates how things work visualizations based on the design guidelines discussed in Section 3. Here, we present algorithms for computing arrows, highlighting both the causal chain and important key frames of motion, and generating exploded views.

6.1 Computing Motion Arrows

For static illustrations, our system automatically computes arrows from the user-specified viewpoint. We support three types of arrows (see Figure 10): cap arrows, side arrows, and translational arrows. Our algorithm for generating these arrows consists of three steps: 1) determine how many arrows of each type to add, 2) compute initial arrow placements, and 3) refine arrow placements to improve their visibility.

For each (non-coaxial) edge in the interaction graph, based on the junction type, we create two arrows, one associated with each node connected by the graph edge. We refer to such arrows as contact-based arrows, as they highlight contact relations. We add contact arrows using the following rules:
cylinder-on-cylinder joints: we add cap arrows on both parts;
cylinder-in-cylinder joints: we add a cap arrow for the part inside and a side arrow for the one outside;
cylinder-on-plane joints: we add a cap arrow on the cylinder and a translational arrow for the translational part;
bevel gears: we add side arrows on both (conical) parts;
worm gears: we add a cap arrow on the cylinder and a side arrow on the helical part.

Note that these rules do not add arrows for certain junction types (e.g., coaxial joints). Thus, after applying the rules, we add a non-contact arrow to any part that does not already have an associated contact arrow. For example, we place a side arrow around a coaxial joint. Furthermore, if a cylindrical part is long, a single arrow may not be sufficient to effectively convey the movement of the part. In this case we add an additional non-contact side arrow to the part. Thus, a part may have multiple arrows assigned to it.
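The joint-to-arrow rules above amount to a small lookup table plus a fallback pass. A hedged Python sketch follows; the table, joint-type strings, and function names are ours, not the paper's code:

```python
# Sketch of the contact-arrow assignment rules described above.
# Joint-type names and data structures are illustrative only.

ARROW_RULES = {
    # joint type           -> (arrow on part A, arrow on part B)
    "cylinder-on-cylinder": ("cap", "cap"),
    "cylinder-in-cylinder": ("cap", "side"),          # inside, outside
    "cylinder-on-plane":    ("cap", "translational"), # cylinder, translational part
    "bevel-gear":           ("side", "side"),
    "worm-gear":            ("cap", "side"),          # cylinder, helical part
}

def assign_contact_arrows(edges):
    """edges: list of (part_a, part_b, joint_type). Returns a dict
    mapping each part to its list of contact arrows."""
    arrows = {}
    for part_a, part_b, joint in edges:
        rule = ARROW_RULES.get(joint)
        if rule is None:           # e.g., coaxial joints get no contact arrows
            continue
        arrows.setdefault(part_a, []).append(rule[0])
        arrows.setdefault(part_b, []).append(rule[1])
    return arrows

def add_noncontact_arrows(parts, arrows):
    """Fallback pass: any part left without a contact arrow gets a
    non-contact side arrow, so every moving part is annotated."""
    for p in parts:
        if not arrows.get(p):
            arrows.setdefault(p, []).append("non-contact-side")
    return arrows
```

The fallback pass mirrors the text: coaxial joints contribute no contact arrows, so the parts they connect pick up non-contact side arrows instead.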
Having decided how many arrows to add and their part associations, we compute their initial positions as follows. During the part analysis, we partitioned each part into cap and side segments (see Section 5). Using the z-buffer, we identify the cap and side face segments with the largest visible areas under self-occlusion and also under occlusion due to other parts. These segments serve as candidate locations for arrow placement; we place side arrows at the middle of the side segment with the maximal score (computed as a combination of visibility and length of the side segment) and cap arrows right above the cap segment with maximal visibility. For contact-based side and cap arrows, we move the arrow within the chosen segment as close as possible to the corresponding contact point. Non-contact translational arrows are placed midway along the translational axis with arrow heads facing the viewer. The local coordinate frames of the arrows are determined based on the directional attributes of the corresponding parts, while the arrow widths are set to a default value. The remaining parameters of the arrows (d, r, θ in Figure 10) are derived in proportion to part parameters like axis, radius, and side/cap segment area. We position non-contact side arrows such that the viewer sees the arrow head face-on.

Our initial arrow placement algorithm puts each arrow on a maximally visible cap or side segment. However, we can still improve arrow visibility by optimizing arrow placement within each segment. For cap arrows, we allow freedom to increase or decrease the radius for placement, while for side arrows we allow the freedom to translate along and rotate about the part axis. Optimization proceeds greedily, simultaneously exploring both directions, taking small steps proportional to the respective arrow thickness. We terminate the process when the arrow head is fully visible and the visibility of the arrow crosses a prescribed threshold (50% in our examples). Cap arrows viewed at oblique angles can be difficult to interpret. In such cases (angles greater than 70 degrees in our system), we switch cap arrows to side arrows, and locally adjust their parameters for better visibility.

Figure 10: (Left) Translational, cap and side arrows. Arrows are first added based on the interaction graph edges, and then to the moving parts without (sufficient) arrow assignment. The initial arrow placement can suffer from occlusion (right inset), which is fixed using a refinement step (center).

Figure 11: Motion arrow results. To convey how parts move, our system automatically computes motion arrows from the user-specified viewpoint. Here, we manually specified the lever in the hammer model (a) and the belt in the chain driver model (c); our system automatically identifies the types of all the other parts.

6.2 Highlighting the Causal Chain

To emphasize the causal chain of actions, our system generates a sequence of frames that highlights the propagation of motions and interactions from the driver throughout the rest of the assembly. Starting from the root of the interaction graph, we perform a breadth-first traversal. At each traversal step, we compute a set of nodes S that includes the frontier of newly visited nodes, as well as any previously visited nodes that are in contact with this frontier. We then generate a frame that highlights S by rendering all other parts in a desaturated manner. To emphasize the motion of highlighted parts, each frame includes any non-contact arrow whose parent part is highlighted, as well as any contact-based arrow whose two associated parts are both highlighted. If a highlighted part only has contact arrows and none of them are included based on this rule, we add the part's longest contact arrow to the frame to ensure that every highlighted part has at least one arrow. In addition, arrows associated with previously visited parts are rendered in a desaturated manner. For animated visualizations, we allow the user to interac-
tively step through the causal chain while the animation plays; at each step, we highlight parts and arrows as described above.

Figure 12: Keyframes for depicting the periodic motion of a piston engine. Because of symmetry across parts and across motion (periodic), snapshot times are decided based on the positional extremes of the piston tops.

6.3 Highlighting Important Key Frames of Motion

As explained in Section 3, some assemblies contain parts that move in complex ways (e.g., the direction of motion changes periodically). Thus, static illustrations often include key frames that help clarify such motions. We automatically compute key frames of motion by examining each translational part in the model; if the part changes direction, we add key frames at the critical times when the part is at its extremal positions. However, since the instantaneous direction of motion for a part is undefined exactly at these critical times, we canonically freeze time δt after the critical time instances to determine which direction the part is moving in (see Figure 12). Additionally, for each part, we also add middle frames between extrema-based keyframes to help the viewer easily establish correspondence between moving parts (see the F05 model example in the accompanying video). However, if such frames already exist as extrema-based keyframes of other parts, we do not add the additional frames, e.g., in the piston-engine example (see Figure 12).

Our system can also generate a single frame sequence that highlights both the causal chain and important key frames of motion. As we traverse the interaction graph to construct the causal chain frame sequence, we check whether any newly highlighted part exhibits complex motion. If so, we insert key frames to convey the motion of the part and then continue traversing the graph (see Figure 14c).

a variety of sources; for details, please refer to the Acknowledgements section. Figures 1 and 11–14 show static illustrations of all ten models. Other than specifying the driver part and its direction of motion, no additional user assistance was required to compute the interaction graph for seven of the models. For the drum, hammer and chain driver models, we manually specified lever, cam and chain parts, respectively. In all of our results, we color the driver blue, fixed parts dark grey, and all other parts light grey. We render translation arrows in green, and side and cap arrows in red.

Our results demonstrate how the visual techniques described in Section 3 help convey the causal chain of motions and interactions that characterize the operation of mechanical assemblies. For example, the arrows in Figure 14a not only indicate the direction of rotation for each gear; their placement near contact points also emphasizes the interactions between parts. The frame sequence in Figure 14b shows how the assembly transforms the rotation of the driving handle through a variety of gear configurations, while the sequence in Figure 14c conveys both the causal chain of interactions (frames 1–3) and the back-and-forth motion of the horizontal rack (frames 3–6) as it engages alternately with the two circular gears. Finally, our animated results (see video) show how sequential
[Figure 14 panels (a)–(d), including the F15 and drum models, each shown as a numbered frame sequence.]

Figure 14: Illustration results. We used our system to generate these how things work illustrations from 3D input models. For each model, we specified the driver part and its direction of motion. In addition, we manually specified the levers in the drum (c). From this input, our system automatically computes the motions and interactions of all assembly parts and generates motion arrows and frame sequences. We created the zoomed-in insets by hand.
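The extremal-position key-frame selection of Section 6.3 can be sketched as follows: sample a translational part's displacement over one period, detect direction reversals by probing the velocity a small δt after each sample (mirroring the paper's canonical freeze just after a critical instance), and add middle frames between consecutive extrema. The function name and sampling parameters are our own illustrative choices:

```python
# Sketch of key-frame time selection for a translational part:
# key frames at direction reversals (extremal positions), plus a
# middle frame between each pair of consecutive extrema.

def key_frame_times(displacement, period, dt=0.01, delta=1e-3):
    """displacement: displacement as a function of time over one period.
    Returns sorted key-frame times (extrema plus in-between frames)."""
    times, velocities = [], []
    t = 0.0
    while t <= period:
        # Probe the velocity just after t, since the instantaneous
        # direction is undefined exactly at a critical time.
        v = (displacement(t + delta) - displacement(t)) / delta
        times.append(t)
        velocities.append(v)
        t += dt
    # A sign change in velocity between samples marks an extremal position.
    extrema = [times[i] for i in range(1, len(times))
               if velocities[i - 1] * velocities[i] < 0]
    middles = [(a + b) / 2 for a, b in zip(extrema, extrema[1:])]
    return sorted(extrema + middles)
```

For a sinusoidal piston-like displacement this yields the two positional extremes per period plus one middle frame between them, matching the behavior described for the piston-engine example in Figure 12.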