
Layered Depth Images

Jonathan Shade (University of Washington)   Steven Gortler (Harvard University)   Li-wei He (Stanford University)   Richard Szeliski (Microsoft Research)

Abstract

In this paper we present a set of efficient image based rendering methods capable of rendering multiple frames per second on a PC. The first method warps Sprites with Depth representing smooth surfaces without the gaps found in other techniques. A second method for more general scenes performs warping from an intermediate representation called a Layered Depth Image (LDI). An LDI is a view of the scene from a single input camera view, but with multiple pixels along each line of sight. The size of the representation grows only linearly with the observed depth complexity in the scene. Moreover, because the LDI data are represented in a single image coordinate system, McMillan's warp ordering algorithm can be successfully adapted. As a result, pixels are drawn in the output image in back-to-front order. No z-buffer is required, so alpha-compositing can be done efficiently without depth sorting. This makes splatting an efficient solution to the resampling problem.

1 Introduction

Image based rendering (IBR) techniques have been proposed as an efficient way of generating novel views of real and synthetic objects. With traditional rendering techniques, the time required to render an image increases with the geometric complexity of the scene. The rendering time also grows as the requested shading computations (such as those requiring global illumination solutions) become more ambitious.

The most familiar IBR method is texture mapping. An image is remapped onto a surface residing in a three-dimensional scene. Traditional texture mapping exhibits two serious limitations. First, the pixelization of the texture map and that of the final image may be vastly different. The aliasing of the classic infinite checkerboard floor is a clear illustration of the problems this mismatch can create. Secondly, texture mapping speed is still limited by the surface the texture is applied to. Thus it would be very difficult to create a texture mapped tree containing thousands of leaves that exhibits appropriate parallax as the viewpoint changes.

Two extensions of the texture mapping model have recently been presented in the computer graphics literature that address these two difficulties. The first is a generalization of sprites. Once a complex scene is rendered from a particular point of view, the image that would be created from a nearby point of view will likely be similar. In this case, the original 2D image, or sprite, can be slightly altered by a 2D affine or projective transformation to approximate the view from the new camera position [31, 27, 15].

The sprite approximation's fidelity to the correct new view is highly dependent on the geometry being represented. In particular, the errors increase with the amount of depth variation in the real part of the scene captured by the sprite. The amount of virtual camera motion away from the point of view of sprite creation also increases the error. Errors decrease with the distance of the geometry from the virtual camera.

The second recent extension is to add depth information to an image to produce a depth image and to then use the optical flow that would be induced by a camera shift to warp the scene into an approximation of the new view [2, 22].

Each of these methods has its limitations. Simple sprite warping cannot produce the parallax induced when parts of the scene have sizable differences in distance from the camera. Flowing a depth image pixel by pixel, on the other hand, can provide proper parallax but will result in gaps in the image either due to visibility changes when some portion of the scene becomes unoccluded, or when a surface is magnified in the new view.

Some solutions have been proposed to the latter problem. Laveau and Faugeras suggest performing a backwards mapping from the output sample location to the input image [14]. This is an expensive operation that requires some amount of searching in the input image. Another possible solution is to think of the input image as a mesh of micro-polygons, and to scan-convert these polygons in the output image. This is an expensive operation, as it requires a polygon scan-convert setup for each input pixel [18], an operation we would prefer to avoid, especially in the absence of specialized rendering hardware. Alternatively one could use multiple input images from different viewpoints. However, if one uses n input images, one effectively multiplies the size of the scene description by n, and the rendering cost increases accordingly.

This paper introduces two new extensions to overcome both of these limitations. The first extension is primarily applicable to smoothly varying surfaces, while the second is useful primarily for very complex geometries. Each method provides efficient image based rendering capable of producing multiple frames per second on a PC.

In the case of sprites representing smoothly varying surfaces, we introduce an algorithm for rendering Sprites with Depth. The algorithm first forward maps (i.e., warps) the depth values themselves and then uses this information to add parallax corrections to a standard sprite renderer.

For more complex geometries, we introduce the Layered Depth Image, or LDI, that contains potentially multiple depth pixels at each discrete location in the image. Instead of a 2D array of depth pixels (a pixel with associated depth information), we store a 2D array of layered depth pixels. A layered depth pixel stores a set of depth pixels along one line of sight sorted in front to back order. The front element in the layered depth pixel samples the first surface seen along that line of sight; the next pixel in the layered depth pixel samples the next surface seen along that line of sight, etc. When rendering from an LDI, the requested view can move away from the original LDI view and expose surfaces that were not visible in the first layer. The previously occluded regions may still be rendered from data stored in some later layer of a layered depth pixel.

Figure 1 Different image based primitives can serve well depending on distance from the camera (environment map, planar sprites, sprite with depth, layered depth image, and polygons, arranged around the camera center for the LDI and Sprite with Depth and its viewing region).

There are many advantages to this representation. The size of the representation grows linearly only with the depth complexity of the image. Moreover, because the LDI data are represented in a single image coordinate system, McMillan's ordering algorithm [21] can be successfully applied. As a result, pixels are drawn in the output image in back to front order allowing proper alpha blending without depth sorting. No z-buffer is required, so alpha-compositing can be done efficiently without explicit depth sorting. This makes splatting an efficient solution to the reconstruction problem.

Sprites with Depth and Layered Depth Images provide us with two new image based primitives that can be used in combination with traditional ones. Figure 1 depicts five types of primitives we may wish to use. The camera at the center of the frustum indicates where the image based primitives were generated from. The viewing volume indicates the range one wishes to allow the camera to move while still re-using these image based primitives.

The choice of which type of image-based or geometric primitive to use for each scene element is a function of its distance, its internal depth variation relative to the camera, as well as its internal geometric complexity. For scene elements at a great distance from the camera one might simply generate an environment map. The environment map is invariant to translation and simply translates as a whole on the screen based on the rotation of the camera. At a somewhat closer range, and for geometrically planar elements, traditional planar sprites (or image caches) may be used [31, 27]. The assumption here is that although the part of the scene depicted in the sprite may display some parallax relative to the background environment map and other sprites, it will not need to depict any parallax within the sprite itself. Yet closer to the camera, for elements with smoothly varying depth, Sprites with Depth are capable of displaying internal parallax but cannot deal with disocclusions due to image flow that may arise in more complex geometric scene elements. Layered Depth Images deal with both parallax and disocclusions and are thus useful for objects near the camera that also contain complex geometries that will exhibit considerable parallax. Finally, traditional polygon rendering may need to be used for immediate foreground objects.

In the sections that follow, we will concentrate on describing the data structures and algorithms for representing and rapidly rendering Sprites with Depth and Layered Depth Images.

2 Previous Work

Over the past few years, there have been many papers on image based rendering. In [17], Levoy and Whitted discuss rendering point data. Chen and Williams presented the idea of rendering from images [2]. Laveau and Faugeras discuss IBR using a backwards map [14]. McMillan and Bishop discuss IBR using cylindrical views [22]. Seitz and Dyer describe a system that allows a user to correctly model view transforms in a user controlled image morphing system [29]. In a slightly different direction, Levoy and Hanrahan [16] and Gortler et al. [7] describe IBR methods using a large number of input images to sample the high dimensional radiance function.

Max uses a representation similar to an LDI [19], but for a purpose quite different than ours; his purpose is high quality anti-aliasing, while our goal is efficiency. Max reports his rendering time as 5 minutes per frame while our goal is multiple frames per second. Max warps from n input LDIs with different camera information; the multiple depth layers serve to represent the high depth complexity of trees. We warp from a single LDI, so that the warping can be done most efficiently. For output, Max warps to an LDI. This is done so that, in conjunction with an A-buffer, high quality, but somewhat expensive, anti-aliasing of the output picture can be performed.

Mark et al. [18] and Darsa et al. [4] create triangulated depth maps from input images with per-pixel depth. Darsa concentrates on limiting the number of triangles by looking for depth coherence across regions of pixels. This triangle mesh is then rendered traditionally, taking advantage of graphics hardware pipelines. Mark et al. describe the use of multiple input images as well. In this aspect of their work, specific triangles are given lowered priority if there is a large discontinuity in depth across neighboring pixels. In this case, if another image fills in the same area with a triangle of higher priority, it is used instead. This helps deal with disocclusions.

Shade et al. [31] and Schaufler et al. [27] render complex portions of a scene such as a tree onto alpha matted billboard-like sprites and then reuse them as textures in subsequent frames. Lengyel and Snyder [15] extend this work by warping sprites by a best fit affine transformation based on a set of sample points in the underlying 3D model. These affine transforms are allowed to vary in time as the position and/or color of the sample points change. Hardware considerations for such a system are discussed in [32].

Figure 2 Back to front output ordering (the epipolar point defined by the layered depth image camera and the output camera).


Horry et al. [10] describe a very simple sprite-like system in which a user interactively indicates planes in an image that represent areas in a given image. Thus, from a single input image and some user supplied information, they can warp an image and provide approximate three dimensional cues about the scene.

The system presented here relies heavily on McMillan's ordering algorithm [21, 20, 22]. Using input and output camera information, a warping order is computed such that pixels that map to the same location in the output image are guaranteed to arrive in back to front order.

In McMillan's work, the depth order is computed by first finding the projection of the output camera's location in the input camera's image plane, that is, the intersection of the line joining the two camera locations with the input camera's image plane. The line joining the two camera locations is called the epipolar line, and the intersection with the image plane is called an epipolar point [6] (see Figure 2). The input image is then split horizontally and vertically at the epipolar point, generally creating 4 image quadrants. (If the epipolar point lies off the image plane, we may have only 2 or 1 regions.) The pixels in each of the quadrants are processed in a different order. Depending on whether the output camera is in front of or behind the input camera, the pixels in each quadrant are processed either inward towards the epipolar point or outwards away from it. In other words, one of the quadrants is processed left to right, top to bottom, another is processed left to right, bottom to top, etc. McMillan discusses in detail the various special cases that arise and proves that this ordering is guaranteed to produce depth ordered output [20].

When warping from an LDI, there is effectively only one input camera view. Therefore one can use the ordering algorithm to order the layered depth pixels visited. Within each layered depth pixel, the layers are processed in back to front order. The formal proof of [20] applies, and the ordering algorithm is guaranteed to work.

3 Rendering Sprites

Sprites are texture maps or images with alphas (transparent pixels) rendered onto planar surfaces. They can be used either for locally caching the results of slower rendering and then generating new views by warping [31, 27, 32, 15], or they can be used directly as drawing primitives (as in video games).

The texture map associated with a sprite can be computed by simply choosing a 3D viewing matrix and projecting some portion of the scene onto the image plane. In practice, a view associated with the current or expected viewpoint is a good choice. A 3D plane equation can also be computed for the sprite, e.g., by fitting a 3D plane to the z-buffer values associated with the sprite pixels. Below, we derive the equations for the 2D perspective mapping between a sprite and its novel view. This is useful both for implementing a backward mapping algorithm, and lays the foundation for our Sprites with Depth rendering algorithm.

A sprite consists of an alpha-matted image I1(x1, y1), a 4 x 4 camera matrix C1 which maps from 3D world coordinates (X, Y, Z, 1) into the sprite's coordinates (x1, y1, z1, 1),

$$\begin{bmatrix} w_1 x_1 \\ w_1 y_1 \\ w_1 z_1 \\ w_1 \end{bmatrix} = C_1 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad (1)$$

(z1 is the z-buffer value), and a plane equation. This plane equation can either be specified in world coordinates, AX + BY + CZ + D = 0, or it can be specified in the sprite's coordinate system, a x1 + b y1 + c z1 + d = 0. In the former case, we can form a new camera matrix Ĉ1 by replacing the third row of C1 with the row [A B C D], while in the latter, we can compute Ĉ1 = P C1, where

$$P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ a & b & c & d \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

(note that [A B C D] = [a b c d] C1).

In either case, we can write the modified projection equation as

$$\begin{bmatrix} w_1 x_1 \\ w_1 y_1 \\ w_1 d_1 \\ w_1 \end{bmatrix} = \hat{C}_1 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad (2)$$

where d1 = 0 for pixels on the plane. For pixels off the plane, d1 is the scaled perpendicular distance to the plane (the scale factor is 1 if A^2 + B^2 + C^2 = 1) divided by the pixel to camera distance w1.

Given such a sprite, how do we compute the 2D transformation associated with a novel view Ĉ2? The mapping between pixels (x1, y1, d1, 1) in the sprite and pixels (w2 x2, w2 y2, w2 d2, w2) in the output camera's image is given by the transfer matrix T1,2 = Ĉ2 Ĉ1^{-1}.

For a flat sprite (d1 = 0), the transfer equation can be written as

$$\begin{bmatrix} w_2 x_2 \\ w_2 y_2 \\ w_2 \end{bmatrix} = H_{1,2} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \qquad (3)$$

where H1,2 is the 2D planar perspective transformation (homography) obtained by dropping the third row and column of T1,2. The coordinates (x2, y2) obtained after dividing out w2 index a pixel address in the output camera's image. Efficient backward mapping techniques exist for performing the 2D perspective warp [8, 35], or texture mapping hardware can be used.
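To make the matrix bookkeeping concrete, a minimal C++ sketch of extracting H1,2 and the epipole from a transfer matrix might look as follows. It assumes T1,2 = Ĉ2 Ĉ1^{-1} has already been formed with whatever matrix routines are at hand; the type and function names are illustrative and are not the authors' code.

#include <array>

// Row-major 4x4 and 3x3 matrices; Vec3 holds a homogeneous 2D point.
using Mat4 = std::array<double, 16>;
using Mat3 = std::array<double, 9>;
using Vec3 = std::array<double, 3>;

// Drop the third row and third column of T12 to obtain the planar perspective
// transformation (homography) H12 of Equation (3).
Mat3 homographyFromTransfer(const Mat4& T) {
    Mat3 H{};
    int hr = 0;
    for (int r = 0; r < 4; ++r) {
        if (r == 2) continue;              // skip the third row
        int hc = 0;
        for (int c = 0; c < 4; ++c) {
            if (c == 2) continue;          // skip the third column
            H[hr * 3 + hc++] = T[r * 4 + c];
        }
        ++hr;
    }
    return H;
}

// The epipole e12 of Equation (4) comes from the third column of T12 (third row dropped).
Vec3 epipoleFromTransfer(const Mat4& T) {
    return { T[0 * 4 + 2], T[1 * 4 + 2], T[3 * 4 + 2] };
}

// Map one flat-sprite pixel (x1, y1) with Equation (3), followed by the w divide.
Vec3 warpFlatSpritePixel(const Mat3& H, double x1, double y1) {
    double wx = H[0] * x1 + H[1] * y1 + H[2];
    double wy = H[3] * x1 + H[4] * y1 + H[5];
    double w  = H[6] * x1 + H[7] * y1 + H[8];
    return { wx / w, wy / w, 1.0 };
}

For backward mapping, the same routine would be applied with H2,1 (the inverse homography) to pull each output pixel back into the sprite.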
3.1 Sprites with Depth

The descriptive power (realism) of sprites can be greatly enhanced by adding an out-of-plane displacement component d1 at each pixel in the sprite.¹ Unfortunately, such a representation can no longer be rendered directly using a backward mapping algorithm.

¹ The d1 values can be stored as a separate image, say as 8-bit signed depths. The full precision of a traditional z-buffer is not required, since these depths are used only to compute local parallax, and not to perform z-buffer merging of primitives. Furthermore, the d1 image could be stored at a lower resolution than the color image, if desired.

Using the same notation as before, we see that the transfer equation is now

$$\begin{bmatrix} w_2 x_2 \\ w_2 y_2 \\ w_2 \end{bmatrix} = H_{1,2} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} + d_1 e_{1,2}, \qquad (4)$$

where e1,2 is called the epipole [6, 26, 12], and is obtained from the third column of T1,2.

Equation (4) can be used to forward map pixels from a sprite to a new view. Unfortunately, this entails the usual problems associated with forward mapping, e.g., the necessity to fill gaps or to use larger splatting kernels, and the difficulty in achieving proper resampling. Notice, however, that Equation (4) could be used to perform a backward mapping step by interchanging the 1 and 2 indices, if only we knew the displacements d2 in the output camera's coordinate frame.


A solution to this problem is to first forward map the displacements d1, and to then use Equation (4) to perform a backward mapping step with the new (view-based) displacements. While this may at first appear to be no faster or more accurate than simply forward warping the color values, it does have some significant advantages.

First, small errors in displacement map warping will not be as evident as errors in the sprite image warping, at least if the displacement map is smoothly varying (in practice, the shape of a simple surface often varies more smoothly than its photometry). If bilinear or higher order filtering is used in the final color (backward) resampling, this two-stage warping will have much lower errors than forward mapping the colors directly with an inaccurate forward map. We can therefore use a quick single-pixel splat algorithm followed by a quick hole filling, or alternatively, use a simple 2 x 2 splat.

The second main advantage is that we can design the forward warping step to have a simpler form by factoring out the planar perspective warp. Notice that we can rewrite Equation (4) as

$$\begin{bmatrix} w_2 x_2 \\ w_2 y_2 \\ w_2 \end{bmatrix} = H_{1,2} \begin{bmatrix} x_3 \\ y_3 \\ 1 \end{bmatrix}, \qquad (5)$$

with

$$\begin{bmatrix} w_3 x_3 \\ w_3 y_3 \\ w_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} + d_1 e^{*}_{1,2}, \qquad (6)$$

where e*1,2 = H1,2^{-1} e1,2. This suggests that Sprite with Depth rendering can be implemented by first shifting pixels by their local parallax, filling any resulting gaps, and then applying a global homography (planar perspective warp). This has the advantage that it can handle large changes in view (e.g., large zooms) with only a small amount of gap filling (since gaps arise only in the first step, and are due to variations in displacement).

Our novel two-step rendering algorithm thus proceeds in two stages:

1. forward map the displacement map d1(x1, y1), using only the parallax component given in Equation (6) to obtain d3(x3, y3);

2a. backward map the resulting warped displacements d3(x3, y3) using Equation (5) to obtain d2(x2, y2) (the displacements in the new camera view);

2b. backward map the original sprite colors, using both the homography H2,1 and the new parallax d2 as in Equation (4) (with the 1 and 2 indices interchanged), to obtain the image corresponding to camera C2.

The last two operations can be combined into a single raster scan over the output image, avoiding the need to perspective warp d3 into d2. More precisely, for each output pixel (x2, y2), we compute (x3, y3) such that

$$\begin{bmatrix} w_3 x_3 \\ w_3 y_3 \\ w_3 \end{bmatrix} = H_{2,1} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix} \qquad (7)$$

to compute where to look up the displacement d3(x3, y3), and form the final address of the source sprite pixel using

$$\begin{bmatrix} w_1 x_1 \\ w_1 y_1 \\ w_1 \end{bmatrix} = \begin{bmatrix} w_3 x_3 \\ w_3 y_3 \\ w_3 \end{bmatrix} + d_3(x_3, y_3)\, e_{2,1}. \qquad (8)$$

We can obtain a quicker, but less accurate, algorithm by omitting the first step, i.e., the pure parallax warp from d1 to d3. If we assume the depth at a pixel before and after the warp will not change significantly, we can use d1 instead of d3 in Equation (8). This still gives a useful illusion of 3-D parallax, but is only valid for a much smaller range of viewing motions (see Figure 3).

Another variant on this algorithm, which uses somewhat more storage but fewer computations, is to compute a 2-D displacement field in the first pass, u3(x3, y3) = x1 - x3, v3(x3, y3) = y1 - y3, where (x3, y3) is computed using the pure parallax transform in Equation (6). In the second pass, the final pixel address in the sprite is computed using

$$\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} = \begin{bmatrix} x_3 \\ y_3 \end{bmatrix} + \begin{bmatrix} u_3(x_3, y_3) \\ v_3(x_3, y_3) \end{bmatrix}, \qquad (9)$$

where this time (x3, y3) is computed using the transform given in Equation (7).

We can make the pure parallax transformation (6) faster by avoiding the per-pixel division required after adding homogeneous coordinates. One way to do this is to approximate the parallax transformation by first moving the epipole to infinity (setting its third component to 0). This is equivalent to having an affine parallax component (all points move in the same direction, instead of towards a common vanishing point). In practice, we find that this still provides a very compelling illusion of 3D shape.
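As a rough illustration of the two-pass structure (forward warp of the displacements, then backward mapping of the colors via Equations (7) and (8)), a sketch along the following lines could be used. It is not the authors' implementation: it uses a single-pixel splat with no gap filling and nearest-neighbor resampling, and Sprite, forwardWarpDisplacements, and backwardSample are illustrative names.

#include <array>
#include <cmath>
#include <vector>

using Mat3 = std::array<double, 9>;   // row-major 3x3
using Vec3 = std::array<double, 3>;

struct Sprite {
    int w, h;
    std::vector<unsigned int> color;  // packed RGBA per pixel
    std::vector<float>        d1;     // out-of-plane displacement per pixel
};

static Vec3 mul(const Mat3& M, double x, double y, double w) {
    return { M[0]*x + M[1]*y + M[2]*w,
             M[3]*x + M[4]*y + M[5]*w,
             M[6]*x + M[7]*y + M[8]*w };
}

// Pass 1: forward map the displacements by the pure parallax transform of Equation (6),
// x3 = x1 + d1 * e12star in homogeneous coordinates, splatting each value into d3.
void forwardWarpDisplacements(const Sprite& s, const Vec3& e12star, std::vector<float>& d3) {
    d3.assign(s.w * s.h, 0.0f);
    for (int y1 = 0; y1 < s.h; ++y1)
        for (int x1 = 0; x1 < s.w; ++x1) {
            double d  = s.d1[y1 * s.w + x1];
            double wx = x1 + d * e12star[0];
            double wy = y1 + d * e12star[1];
            double ww = 1.0 + d * e12star[2];
            int x3 = (int)std::lround(wx / ww);
            int y3 = (int)std::lround(wy / ww);
            if (x3 >= 0 && x3 < s.w && y3 >= 0 && y3 < s.h)
                d3[y3 * s.w + x3] = (float)d;         // single-pixel splat; gap fill omitted
        }
}

// Pass 2: for one output pixel, apply H21 (Equation (7)) to find (x3, y3), look up the warped
// displacement, and add d3 * e21 (Equation (8)) to locate the source sprite pixel.
unsigned int backwardSample(const Sprite& s, const std::vector<float>& d3,
                            const Mat3& H21, const Vec3& e21, int x2, int y2) {
    Vec3 p3 = mul(H21, x2, y2, 1.0);
    int ix3 = (int)std::lround(p3[0] / p3[2]);
    int iy3 = (int)std::lround(p3[1] / p3[2]);
    if (ix3 < 0 || ix3 >= s.w || iy3 < 0 || iy3 >= s.h) return 0;   // outside the sprite
    double d  = d3[iy3 * s.w + ix3];
    double x1 = p3[0] + d * e21[0], y1 = p3[1] + d * e21[1], w1 = p3[2] + d * e21[2];
    int ix1 = (int)std::lround(x1 / w1), iy1 = (int)std::lround(y1 / w1);
    if (ix1 < 0 || ix1 >= s.w || iy1 < 0 || iy1 >= s.h) return 0;
    return s.color[iy1 * s.w + ix1];                                 // nearest-neighbor lookup
}

A full implementation would add the gap-filling step after the first pass and bilinear (or higher order) filtering in the color resampling, as discussed above.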
Figure 3 shows some of the steps in our two-pass warping algorithm. Figures 3a and 3f show the original sprite (color) image and the depth map. Figure 3b shows the sprite warped with no parallax. Figures 3g, 3h, and 3i show the depth map forward warped with only pure parallax, only the perspective projection, and both. Figure 3c shows the backward warp using the incorrect depth map d1 (note how dark "background" colors are mapped onto the "bump"), whereas Figure 3d shows the backward warp using the correct depth map d3. The white pixels near the right hand edge are a result of using only a single step of gap filling. Using three steps results in the better quality image shown in Figure 3e. Gaps also do not appear for a less quickly slanting d map, such as the pyramid shown in Figure 3j.

The rendering times for the 256 x 256 image shown in Figure 3 on a 300 MHz Pentium II are as follows. Using bilinear pixel sampling, the frame rates are 30 Hz for no z-parallax, 21 Hz for "crude" one-pass warping (no forward warping of d1 values), and 16 Hz for two-pass warping. Using nearest-neighbor resampling, the frame rates go up to 47 Hz, 24 Hz, and 20 Hz, respectively.

3.2 Recovering sprites from image sequences

While sprites and sprites with depth can be generated using computer graphics techniques, they can also be extracted from image sequences using computer vision techniques. To do this, we use a layered motion estimation algorithm [33, 1], which simultaneously segments the sequence into coherently moving regions, and computes a parametric motion estimate (planar perspective transformation) for each layer. To convert the recovered layers into sprites, we need to determine the plane equation associated with each region.


Figure 3 Plane with bump rendering example: (a) input color (sprite) image I1(x1, y1); (b) sprite warped by homography only (no parallax); (c) sprite warped by homography and crude parallax (d1); (d) sprite warped by homography and true parallax (d2); (e) with gap fill width set to 3; (f) input depth map d1(x1, y1); (g) pure parallax warped depth map d3(x3, y3); (h) forward warped depth map d2(x2, y2); (i) forward warped depth map without parallax correction; (j) sprite with "pyramid" depth map.

Figure 4 Results of sprite extraction from image sequence: (a) third of five images; (b) initial segmentation into six layers; (c) recovered depth map; (d) the five layer sprites; (e) residual depth image for fifth layer; (f) re-synthesized third image (note extended field of view); (g) novel view without residual depth; (h) novel view with residual depth (note the "rounding" of the people).


We do this by tracking features from frame to frame and applying a standard structure from motion algorithm to recover the camera parameters (viewing matrices) for each frame [6]. Tracking several points on each sprite enables us to reconstruct their 3D positions, and hence to estimate their 3D plane equations [1]. Once the sprite pixel assignments have been recovered, we run a traditional stereo algorithm to recover the out-of-plane displacements.

The results of applying the layered motion estimation algorithm to the first five images from a 40-image stereo dataset (courtesy of Dayton Taylor) are shown in Figure 4. Figure 4(a) shows the middle input image, Figure 4(b) shows the initial pixel assignment to layers, Figure 4(c) shows the recovered depth map, and Figure 4(e) shows the residual depth map for layer 5. Figure 4(d) shows the recovered sprites. Figure 4(f) shows the middle image re-synthesized from these sprites, while Figures 4(g-h) show the same sprite collection seen from a novel viewpoint (well outside the range of the original views), both with and without residual depth-based correction (parallax). The gaps visible in Figures 4(c) and 4(f) lie outside the area corresponding to the middle image, where the appropriate parts of the background sprites could not be seen.

4 Layered Depth Images

While the use of sprites and Sprites with Depth provides a fast means to warp planar or smoothly varying surfaces, more general scenes require the ability to handle more general disocclusions and large amounts of parallax as the viewpoint moves. These needs have led to the development of Layered Depth Images (LDI).

Like a sprite with depth, pixels contain depth values along with their colors (i.e., a depth pixel). In addition, a Layered Depth Image (Figure 5) contains potentially multiple depth pixels per pixel location. The farther depth pixels, which are occluded from the LDI center, will act to fill in the disocclusions that occur as the viewpoint moves away from the center.

Figure 5 Layered Depth Image (depth pixels a, b, c, d seen from cameras C1, C2, C3; pixel a projects to a2 in camera C2).

The structure of an LDI is summarized by the following conceptual representation:

DepthPixel =
    ColorRGBA: 32 bit integer
    Z: 20 bit integer
    SplatIndex: 11 bit integer

LayeredDepthPixel =
    NumLayers: integer
    Layers[0..numlayers-1]: array of DepthPixel

LayeredDepthImage =
    Camera: camera
    Pixels[0..xres-1,0..yres-1]: array of LayeredDepthPixel

The layered depth image contains camera information plus an array of size xres by yres layered depth pixels. In addition to image data, each layered depth pixel has an integer indicating how many valid depth pixels are contained in that pixel. The data contained in the depth pixel includes the color, the depth of the object seen at that pixel, plus an index into a table that will be used to calculate a splat size for reconstruction. This index is composed from a combination of the normal of the object seen and the distance from the LDI camera.

In practice, we implement Layered Depth Images in two ways. When creating layered depth images, it is important to be able to efficiently insert and delete layered depth pixels, so the Layers array in the LayeredDepthPixel structure is implemented as a linked list. When rendering, it is important to maintain spatial locality of depth pixels in order to most effectively take advantage of the cache in the CPU [13]. In Section 5.1 we discuss the compact render-time version of layered depth images.
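The conceptual structures above might translate into C++ roughly as follows. This is a sketch, not the authors' code: a dynamic array stands in for the construction-time linked list, bitfields approximate the 20-bit and 11-bit packing, and Camera is a placeholder for whatever view/projection representation is used.

#include <cstdint>
#include <vector>

// One sample along a line of sight: color, quantized depth, and an index into the
// precomputed splat-size table (derived from the surface normal and distance to the LDI camera).
struct DepthPixel {
    uint32_t colorRGBA;       // 32-bit RGBA
    uint32_t z : 20;          // 20-bit quantized depth
    uint32_t splatIndex : 11; // 11-bit splat table index
};

// All samples along one line of sight, sorted front to back.
struct LayeredDepthPixel {
    std::vector<DepthPixel> layers;   // a linked list in the paper's construction-time version
    int numLayers() const { return (int)layers.size(); }
};

// A single-view image of layered depth pixels plus the camera that produced it.
struct LayeredDepthImage {
    struct Camera { double matrix[16]; } camera;   // placeholder camera representation
    int xres = 0, yres = 0;
    std::vector<LayeredDepthPixel> pixels;         // xres * yres entries

    LayeredDepthPixel& at(int x, int y) { return pixels[y * xres + x]; }
};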
There are a variety of ways to generate an LDI. Given a synthetic scene, we could use multiple images from nearby points of view for which depth information is available at each pixel. This information can be gathered from a standard ray tracer that returns depth per pixel or from a scan conversion and z-buffer algorithm where the z-buffer is also returned. Alternatively, we could use a ray tracer to sample an environment in a less regular way and then store computed ray intersections in the LDI structure. Given multiple real images, we can turn to computer vision techniques that can infer pixel correspondence and thus deduce depth values per pixel. We will demonstrate results from each of these three methods.

4.1 LDIs from Multiple Depth Images

We can construct an LDI by warping n depth images into a common camera view. For example the depth images C2 and C3 in Figure 5 can be warped to the camera frame defined by the LDI (C1 in Figure 5).³ If, during the warp from the input camera to the LDI camera, two or more pixels map to the same layered depth pixel, their Z values are compared. If the Z values differ by more than a preset epsilon, a new layer is added to that layered depth pixel for each distinct Z value (i.e., NumLayers is incremented and a new depth pixel is added); otherwise (e.g., depth pixels c and d in Figure 5), the values are averaged, resulting in a single depth pixel. This preprocessing is similar to the rendering described by Max [19]. This construction of the layered depth image is effectively decoupled from the final rendering of images from desired viewpoints. Thus, the LDI construction does not need to run at multiple frames per second to allow interactive camera motion.

³ Any arbitrary single coordinate system can be specified here. However, we have found it best to use one of the original camera coordinate systems. This results in fewer pixels needing to be resampled twice; once in the LDI construction, and once in the rendering process.
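The epsilon merge rule described above could be coded along these lines, reusing the sketch structures given earlier; insertSample, averageRGBA, and the zEpsilon parameter are illustrative names rather than the paper's API, and the sort order assumes larger Z values are farther from the LDI camera.

#include <cstdint>

// Average each 8-bit channel of two packed RGBA colors.
static uint32_t averageRGBA(uint32_t a, uint32_t b) {
    uint32_t out = 0;
    for (int shift = 0; shift < 32; shift += 8) {
        uint32_t ca = (a >> shift) & 0xFF, cb = (b >> shift) & 0xFF;
        out |= (((ca + cb) / 2) & 0xFF) << shift;
    }
    return out;
}

// Insert one warped sample into the layered depth pixel at (x, y): samples within zEpsilon of
// an existing layer are averaged with it, otherwise a new layer is added front-to-back.
void insertSample(LayeredDepthImage& ldi, int x, int y,
                  uint32_t colorRGBA, uint32_t z, uint32_t splatIndex, uint32_t zEpsilon) {
    LayeredDepthPixel& ldp = ldi.at(x, y);
    for (DepthPixel& existing : ldp.layers) {
        long dz = (long)existing.z - (long)z;
        if (dz < 0) dz = -dz;
        if (dz <= (long)zEpsilon) {
            existing.colorRGBA = averageRGBA(existing.colorRGBA, colorRGBA);
            return;
        }
    }
    DepthPixel dp;                           // no match: create a new depth pixel
    dp.colorRGBA  = colorRGBA;
    dp.z          = z & 0xFFFFFu;
    dp.splatIndex = splatIndex & 0x7FFu;
    auto it = ldp.layers.begin();
    while (it != ldp.layers.end() && it->z < dp.z) ++it;   // keep layers sorted front to back
    ldp.layers.insert(it, dp);
}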
4.2 LDIs from a Modified Ray Tracer

By construction, a Layered Depth Image reconstructs images of a scene well from the center of projection of the LDI (we simply display the nearest depth pixels). The quality of the reconstruction from another viewpoint will depend on how closely the distribution of depth pixels in the LDI, when warped to the new viewpoint, corresponds to the pixel density in the new image. Two common events that occur are: (1) disocclusions as the viewpoint changes, and (2) surfaces that grow in terms of screen space.


Figure 6 An LDI consists of the 90 degree frustum exiting one side of a cube. The cube represents the region of interest in which the viewer will be able to move.

For example, when a surface is edge on to the LDI, it covers no area. Later, it may face the new viewpoint and thus cover some screen space.

When using a ray tracer, we have the freedom to sample the scene with any distribution of rays we desire. We could simply allow the rays emanating from the center of the LDI to pierce surfaces, recording each hit along the way (up to some maximum). This would solve the disocclusion problem but would not effectively sample surfaces edge on to the LDI.

What set of rays should we trace to sample the scene, to best approximate the distribution of rays from all possible viewpoints we are interested in? For simplicity, we have chosen to use a cubical region of empty space surrounding the LDI center to represent the region that the viewer is able to move in. Each face of the viewing cube defines a 90 degree frustum which we will use to define a single LDI (Figure 6). The six faces of the viewing cube thus cover all of space. For the following discussion we will refer to a single LDI.

Each ray in free space has four coordinates, two for position and two for direction. Since all rays of interest intersect the cube faces, we will choose the outward intersection to parameterize the position of the ray. Direction is parameterized by two angles.

Given no a priori knowledge of the geometry in the scene, we assume that every ray intersecting the cube is equally important. To achieve a uniform density of rays we sample the positional coordinates uniformly. A uniform distribution over the hemisphere of directions requires that the probability of choosing a direction is proportional to the projected area in that direction. Thus, the direction is weighted by the cosine of the angle off the normal to the cube face.

Choosing a cosine weighted direction over a hemisphere can be accomplished by uniformly sampling the unit disk formed by the base of the hemisphere to get two coordinates of the ray direction, say x and y if the z-axis is normal to the disk. The third coordinate is chosen to give a unit length (z = sqrt(1 - x^2 - y^2)). We make the selection within the disk by first selecting a point in the unit square, then applying a measure preserving mapping [24] that maps the unit square to the unit disk.
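A short sketch of this direction sampling follows. The paper uses a particular measure preserving square-to-disk mapping [24]; this sketch substitutes the standard polar warp (which is also measure preserving, though with more distortion), and the function name is an assumption. The returned direction is expressed in the local frame of a cube face, with +z along the face's outward normal.

#include <cmath>
#include <random>

struct Dir { double x, y, z; };

// Cosine-weighted hemisphere sampling: pick a point uniformly in the unit square, map it to
// the unit disk, and lift it to the hemisphere with z = sqrt(1 - x^2 - y^2).
Dir sampleCosineDirection(std::mt19937& rng) {
    const double kPi = 3.14159265358979323846;
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double u = uni(rng), v = uni(rng);
    double r = std::sqrt(u);                    // sqrt gives a uniform density over the disk
    double theta = 2.0 * kPi * v;
    double x = r * std::cos(theta);
    double y = r * std::sin(theta);
    double zz = 1.0 - x * x - y * y;
    double z = zz > 0.0 ? std::sqrt(zz) : 0.0;  // unit length direction
    return { x, y, z };
}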
Given this desired distribution of rays, there are a variety of ways to perform the sampling:

Uniform. A straightforward stochastic method would take as input the number of rays to cast. Then, for each ray it would choose an origin on the cube face and a direction from the cosine distribution and cast the ray into the scene. There are two problems with this simple scheme. First, such white noise distributions tend to form unwanted clumps. Second, since there is no coherence between rays, complex scenes require considerable memory thrashing since rays will access the database in a random way [25]. The model of the chestnut tree seen in the color images was too complex to sample with a pure stochastic method on a machine with 320MB of memory.

Stratified Stochastic. To improve the coherence and distribution of rays, we employ a stratified scheme. In this method, we divide the 4D space of rays uniformly into a grid of N x N x N x N strata. For each stratum, we cast M rays. Enough coherence exists within a stratum that swapping of the data set is alleviated. Typical values for N and M are 32 and 16, generating approximately 16 million rays per cube face.

Once a ray is chosen, we cast it into the scene. If it hits an object, and that object lies in the LDI's frustum, we reproject the intersection into the LDI, as depicted in Figure 7, to determine which layered depth pixel should receive the sample. If the new sample is within an epsilon tolerance in depth of an existing depth pixel, the color of the new sample is averaged with the existing depth pixel. Otherwise, the color, normal, and distance to the sample create a new depth pixel that is inserted into the Layered Depth Pixel.

Figure 7 Intersections from sampling rays A and B are added to the same layered depth pixel.

4.3 LDIs from Real Images

The dinosaur model in Figure 13 is constructed from 21 photographs of the object undergoing a 360 degree rotation on a computer-controlled calibrated turntable. An adaptation of Seitz and Dyer's voxel coloring algorithm [30] is used to obtain the LDI representation directly from the input images. The regular voxelization of Seitz and Dyer is replaced by a view-centered voxelization similar to the LDI structure. The procedure entails moving outward on rays from the LDI camera center and projecting candidate voxels back into the input images. If all input images agree on a color, this voxel is filled as a depth pixel in the LDI structure. This approach enables straightforward construction of LDI's from images that do not contain depth per pixel.

5 Rendering Layered Depth Images

Our fast warping-based renderer takes as input an LDI along with its associated camera information. Given a new desired camera position, the warper uses an incremental warping algorithm to efficiently create an output image. Pixels from the LDI are splatted into the output image using the over compositing operation. The size and footprint of the splat is based on an estimated size of the reprojected pixel.

5.1 Space Efficient Representation

When rendering, it is important to maintain the spatial locality of depth pixels to exploit the second level cache in the CPU [13]. To this end, we reorganize the depth pixels into a linear array ordered from bottom to top and left to right in screen space, and back to front along a ray. We also separate out the number of layers in each layered depth pixel from the depth pixels themselves. The layered depth pixel structure does not exist explicitly in this implementation. Instead, a double array of offsets is used to locate each depth pixel. The number of depth pixels in each scanline is accumulated into a vector of offsets to the beginning of each scanline. Within each scanline, for each pixel location, a total count of the depth pixels from the beginning of the scanline to that location is maintained. Thus to find any layered depth pixel, one simply offsets to the beginning of the scanline and then further to the first depth pixel at that location. This supports scanning in right-to-left order as well as the clipping operation discussed later.
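One possible shape for this render-time layout is sketched below, reusing the DepthPixel sketch from Section 4; the member names (scanlineStart, countBefore, numLayers) are illustrative, not the authors' identifiers.

#include <cstdint>
#include <vector>

// Compact render-time LDI: all depth pixels in one linear array, ordered left to right within
// a scanline, bottom to top across scanlines, and back to front along each ray, plus the two
// offset tables used to find the first depth pixel stored at any (x, y).
struct CompactLDI {
    int xres = 0, yres = 0;
    std::vector<DepthPixel> depthPixels;   // all layers, concatenated
    std::vector<int> scanlineStart;        // yres entries: offset of each scanline's first pixel
    std::vector<int> countBefore;          // xres*yres entries: depth pixels preceding (x, y)
                                           // within its own scanline
    std::vector<int> numLayers;            // xres*yres entries: layers stored at (x, y)

    // Index of the first depth pixel stored for pixel location (x, y).
    int firstLayerIndex(int x, int y) const {
        return scanlineStart[y] + countBefore[y * xres + x];
    }
    int layerCount(int x, int y) const { return numLayers[y * xres + x]; }
};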

5.2 Incremental Warping Computation

The incremental warping computation is similar to the one used for certain texture mapping operations [9, 28]. The geometry of this computation has been analyzed by McMillan [23], and efficient computation for the special case of orthographic input images is given in [3].

Let C1 be the 4 x 4 matrix for the LDI camera. It is composed of an affine transformation matrix, a projection matrix, and a viewport matrix, C1 = V1 · P1 · A1. This camera matrix transforms a point from the global coordinate system into the camera's projected image coordinate system. The projected image coordinates (x1, y1), obtained after multiplying the point's global coordinates by C1 and dividing out w1, index a screen pixel address. The z1 coordinate can be used for depth comparisons in a z buffer.

Let C2 be the output camera's matrix. Define the transfer matrix as T1,2 = C2 · C1^{-1}. Given the projected image coordinates of some point seen in the LDI camera (e.g., the coordinates of a in Figure 5), this matrix computes the image coordinates as seen in the output camera (e.g., the image coordinates of a2 in camera C2 in Figure 5):

$$T_{1,2} \cdot \begin{bmatrix} x_1 \\ y_1 \\ z_1 \\ 1 \end{bmatrix} = \begin{bmatrix} x_2 \cdot w_2 \\ y_2 \cdot w_2 \\ z_2 \cdot w_2 \\ w_2 \end{bmatrix} = \mathrm{result}$$

The coordinates (x2, y2) obtained after dividing by w2 index a pixel address in the output camera's image.

Using the linearity of matrix operations, this matrix multiply can be factored to reuse much of the computation from each iteration through the layers of a layered depth pixel; result can be computed as

$$T_{1,2} \cdot \begin{bmatrix} x_1 \\ y_1 \\ z_1 \\ 1 \end{bmatrix} = T_{1,2} \cdot \begin{bmatrix} x_1 \\ y_1 \\ 0 \\ 1 \end{bmatrix} + z_1 \cdot T_{1,2} \cdot \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} = \mathrm{start} + z_1 \cdot \mathrm{depth}$$

To compute the warped position of the next layered depth pixel along a scanline, the new start is simply incremented:

$$T_{1,2} \cdot \begin{bmatrix} x_1 + 1 \\ y_1 \\ 0 \\ 1 \end{bmatrix} = T_{1,2} \cdot \begin{bmatrix} x_1 \\ y_1 \\ 0 \\ 1 \end{bmatrix} + T_{1,2} \cdot \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \mathrm{start} + \mathrm{xincr}$$

Figure 8 Values for size computation of a projected pixel (surface normal, distances d1 and d2, and angles θ1, θ2, φ1, φ2 relative to cameras C1 and C2).

The warping algorithm proceeds using McMillan's ordering algorithm [21]. The LDI is broken up into four regions above and below and to the left and right of the epipolar point. For each quadrant, the LDI is traversed in (possibly reverse) scan line order. At the beginning of each scan line, start is computed. The sign of xincr is determined by the direction of processing in this quadrant. Each layered depth pixel in the scan line is then warped to the output image by calling Warp. This procedure visits each of the layers in back to front order and computes result to determine its location in the output image. As in perspective texture mapping, a divide is required per pixel. Finally, the depth pixel's color is splatted at this location in the output image.

The following pseudo code summarizes the warping algorithm applied to each layered depth pixel.

procedure Warp(ldpix, start, depth, xincr)
    for k = 0 to ldpix.NumLayers - 1
        z1 = ldpix.Layers[k].Z
        result = start + z1 * depth
        // cull if the depth pixel goes behind the output camera
        // or if the depth pixel goes out of the output camera's frustum
        if result.w > 0 and IsInViewport(result) then
            result = result / result.w
            // see next section
            sqrtSize = z2 * lookupTable[ldpix.Layers[k].SplatIndex]
            splat(ldpix.Layers[k].ColorRGBA, x2, y2, sqrtSize)
        end if
    end for
    // increment start for the next layered depth pixel on this scan line
    start = start + xincr
end procedure
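The per-scanline setup implied by the equations above might be sketched as follows; the row-major Mat4 layout, the helper apply, and beginScanline are assumptions of this sketch, and the actual per-layer work is the Warp procedure shown above.

#include <array>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<double, 16>;   // row-major; T12 = C2 * inverse(C1), formed elsewhere

static Vec4 apply(const Mat4& T, double x, double y, double z, double w) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        r[i] = T[i*4+0]*x + T[i*4+1]*y + T[i*4+2]*z + T[i*4+3]*w;
    return r;
}

struct ScanlineWarp {
    Vec4 start;   // warped position of (x1, y1, 0, 1) at the beginning of the scan line
    Vec4 depth;   // T12 * (0, 0, 1, 0): per-layer offset, scaled by z1
    Vec4 xincr;   // T12 * (1, 0, 0, 0): step to the next pixel along the scan line
};

// Set up the incremental quantities for one scan line. Within the line, each layer uses
// result = start + z1 * depth, and after each layered depth pixel start is advanced by xincr
// (negated when the quadrant is traversed right to left).
ScanlineWarp beginScanline(const Mat4& T12, int x1, int y1) {
    return { apply(T12, x1, y1, 0, 1), apply(T12, 0, 0, 1, 0), apply(T12, 1, 0, 0, 0) };
}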


5.3 Splat Size Computation

To splat the LDI into the output image, we estimate the projected area of the warped pixel. This is a rough approximation to the footprint evaluation [34] optimized for speed. The proper size can be computed (differentially) as

$$\mathrm{size} = \frac{(d_1)^2 \cos(\theta_2)\, \mathrm{res}_2 \tan(\mathrm{fov}_1/2)}{(d_2)^2 \cos(\theta_1)\, \mathrm{res}_1 \tan(\mathrm{fov}_2/2)}$$

where d1 is the distance from the sampled surface point to the LDI camera, fov1 is the field of view of the LDI camera, res1 = (w1 h1)^{-1} where w1 and h1 are the width and height of the LDI, and θ1 is the angle between the surface normal and the line of sight to the LDI camera (see Figure 8). The same terms with subscript 2 refer to the output camera.

It will be more efficient to compute an approximation of the square root of size,

$$\sqrt{\mathrm{size}} = \frac{d_1 \sqrt{\cos(\theta_2)\, \mathrm{res}_2 \tan(\mathrm{fov}_1/2)}}{d_2 \sqrt{\cos(\theta_1)\, \mathrm{res}_1 \tan(\mathrm{fov}_2/2)}} \approx d_1 \cdot \frac{1}{Z_2} \cdot \frac{\sqrt{\cos(\theta_2)\, \mathrm{res}_2 \tan(\mathrm{fov}_1/2)}}{\sqrt{\cos(\theta_1)\, \mathrm{res}_1 \tan(\mathrm{fov}_2/2)}} = z_2 \cdot d_1 \cdot \frac{\sqrt{\cos(\theta_2)\, \mathrm{res}_2 \tan(\mathrm{fov}_1/2)}}{\sqrt{\cos(\theta_1)\, \mathrm{res}_1 \tan(\mathrm{fov}_2/2)}}$$

We approximate the θs as the angles φ between the surface normal vector and the z axes of the cameras' coordinate systems. We also approximate d2 by Z2, the z coordinate of the sampled point in the output camera's unprojected eye coordinate system. During rendering, we set the projection matrix such that z2 = 1/Z2.

The current implementation supports 4 different splat sizes, so a very crude approximation of the size computation is implemented using a lookup table. For each pixel in the LDI, we store d1 using 5 bits. We use 6 bits to encode the normal, 3 for nx, and 3 for ny. This gives us an eleven-bit lookup table index. Before rendering each new image, we use the new output camera information to precompute values for the 2048 possible lookup table indexes. At each pixel we obtain sqrt(size) by multiplying the computed z2 by the value found in the lookup table:

$$\sqrt{\mathrm{size}} \approx z_2 \cdot \mathrm{lookup}[n_x, n_y, d_1]$$

To maintain the accuracy of the approximation for d1, we discretize d1 nonlinearly using a simple exponential function that allocates more bits to the nearby d1 values, and fewer bits to the distant d1 values.

The four splat sizes we currently use have 1 by 1, 3 by 3, 5 by 5, and 7 by 7 pixel footprints. Each pixel in a footprint has an alpha value to approximate a Gaussian splat kernel. However, the alpha values are rounded to 1, 1/2, or 1/4, so the alpha blending can be done with integer shifts and adds.
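Written out directly, before any lookup-table approximation, the size estimate is straightforward to evaluate. The sketch below simply restates the formulas above with every quantity passed in explicitly; in the actual renderer the d1- and normal-dependent factors are tabulated (2048 entries, indexed by the 5-bit d1 code and 6-bit normal code) and only the multiplication by z2 happens per pixel.

#include <cmath>

// Projected-area estimate of a warped LDI pixel (the "size" equation of Section 5.3).
double splatSize(double d1, double d2, double theta1, double theta2,
                 double res1, double res2, double fov1, double fov2) {
    double num = d1 * d1 * std::cos(theta2) * res2 * std::tan(fov1 / 2.0);
    double den = d2 * d2 * std::cos(theta1) * res1 * std::tan(fov2 / 2.0);
    return num / den;
}

// sqrt(size), factored so the d1- and angle-dependent part can be precomputed in a table and
// the per-pixel z2 = 1/Z2 factor applied at warp time: sqrtSize = z2 * lookupTable[splatIndex].
double splatSqrtSize(double d1, double d2, double theta1, double theta2,
                     double res1, double res2, double fov1, double fov2) {
    double num = std::cos(theta2) * res2 * std::tan(fov1 / 2.0);
    double den = std::cos(theta1) * res1 * std::tan(fov2 / 2.0);
    return (d1 / d2) * std::sqrt(num / den);
}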
5.4 Depth Pixel Representation

The size of a cache line on current Intel processors (Pentium Pro and Pentium II) is 32 bytes. To fit four depth pixels into a single cache line we convert the floating point Z value to a 20 bit integer. This is then packed into a single word along with the 11 bit splat table index. These 32 bits along with the R, G, B, and alpha values fill out the 8 bytes. This seemingly small optimization yielded a 25 percent improvement in rendering speed.
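A sketch of this 8-byte packing follows. The linear quantization of Z between caller-supplied near and far values is an assumption of the sketch (the paper does not spell out the quantization), and the names are illustrative.

#include <cstdint>

// 20-bit quantized Z and the 11-bit splat-table index share one 32-bit word; with the 32-bit
// RGBA color this makes an 8-byte depth pixel, so four of them fit in a 32-byte cache line.
struct PackedDepthPixel {
    uint32_t zAndSplat;   // bits 0..19: quantized Z, bits 20..30: splat index
    uint32_t colorRGBA;
};
static_assert(sizeof(PackedDepthPixel) == 8, "four depth pixels per 32-byte cache line");

inline uint32_t quantizeZ(float z, float zMin, float zMax) {
    float t = (z - zMin) / (zMax - zMin);        // normalize to [0, 1]
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return (uint32_t)(t * ((1u << 20) - 1));     // 20-bit integer depth
}

inline PackedDepthPixel packDepthPixel(float z, float zMin, float zMax,
                                       uint32_t splatIndex, uint32_t colorRGBA) {
    PackedDepthPixel p;
    p.zAndSplat = quantizeZ(z, zMin, zMax) | ((splatIndex & 0x7FFu) << 20);
    p.colorRGBA = colorRGBA;
    return p;
}

inline uint32_t unpackZ(PackedDepthPixel p)     { return p.zAndSplat & 0xFFFFFu; }
inline uint32_t unpackSplat(PackedDepthPixel p) { return (p.zAndSplat >> 20) & 0x7FFu; }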

Figure 9 LDI with two segments (near and far segments of the LDI frustum, the desired view, and the depth pixels that are drawn versus clipped).

5.5 Clipping

The LDI of the chestnut tree scene in Figure 11 is a large data set containing over 1.1 million depth pixels. If we naively render this LDI by reprojecting every depth pixel, we would only be able to render at one or two frames per second. When the viewer is close to the tree, there is no need to flow those pixels that will fall outside of the new view. Unseen pixels can be culled by intersecting the view frustum with the frustum of the LDI. This is implemented by intersecting the view frustum with the near and far plane of the LDI frustum, and taking the bounding box of the intersection. This region defines the rays of depth pixels that could be seen in the new view. This computation is conservative, and gives suboptimal results when the viewer is looking at the LDI from the side (see Figure 9). The view frustum intersects almost the entire cross section of the LDI frustum, but only those depth pixels in the desired view need be warped. Our simple clipping test indicates that most of the LDI needs to be warped. To alleviate this, we split the LDI into two segments, a near and a far segment (see Figure 9). These are simply two frusta stacked one on top of the other. The near frustum is kept smaller than the back segment. We clip each segment individually, and render the back segment first and the front segment second. Clipping can speed rendering times by a factor of 2 to 4.

6 Results

Sprites with Depth and Layered Depth Images have been implemented in C++. The color figures show two examples of rendering sprites and three examples of rendering LDIs. Figures 3a through 3j show the results of rendering a sprite with depth. The hemisphere in the middle of the sprite pops out of the plane of the sprite, and the illusion of depth is quite good. Figure 4 shows the process of extracting sprites from multiple images using the vision techniques discussed in Section 3. There is a great deal of parallax between the layers of sprites, resulting in a convincing and inexpensive image-based-rendering method.

Figure 10 shows two views of a barnyard scene modeled in Softimage. A set of 20 images was pre-rendered from cameras that encircle the chicken using the Mental Ray renderer. The renderer returns colors, depths, and normals at each pixel. The images were rendered at 320 by 320 pixel resolution, taking approximately one minute each to generate. In the interactive system, the 3 images out of the 17 that have the closest direction to the current camera are chosen. The preprocessor (running in a low-priority thread) uses these images to create an LDI in about 1 second. While the LDIs are allocated with a maximum of 10 layers per pixel, the average depth complexity for these LDIs is only 1.24. Thus the use of three input images only increases the rendering cost by 24 percent. The fast renderer (running concurrently in a high-priority thread) generates images at 300 by 300 resolution. On a Pentium II PC running at 300MHz, we achieved frame rates of 8 to 10 frames per second.

Figure 10 Barnyard scene

Figure 11 Near segment of chestnut tree

Figure 12 Chestnut tree in front of environment map



Figure 13 Dinosaur model reconstructed from 21 photographs

Figures 11 and 12 show two cross-eye stereo pairs of a chestnut tree. In Figure 11 only the near segment is displayed. Figure 12 shows both segments in front of an environment map. The LDIs were created using a modified version of the Rayshade [11] ray tracer. The tree model is very large; Rayshade allocates over 340 MB of memory to render a single image of the tree. The stochastic method discussed in Section 4.2 took 7 hours to trace 16 million rays through this scene using an SGI Indigo2 with a 250 MHz processor and 320MB of memory. The resulting LDI has over 1.1 million depth pixels, 70,000 of which were placed in the near segment with the rest in the far segment. When rendering this interactively we attain frame rates between 4 and 10 frames per second on a Pentium II PC running at 300MHz.

7 Discussion

In this paper, we have described two novel techniques for image based rendering. The first technique renders Sprites with Depth without visible gaps, and with a smoother rendering than traditional forward mapping (splatting) techniques. It is based on the observation that a forward mapped displacement map does not have to be as accurate as a forward mapped color image. If the displacement map is smooth, the inaccuracies in the warped displacement map result in only sub-pixel errors in the final color pixel sample positions.

Our second novel approach to image based rendering is a Layered Depth Image representation. The LDI representation provides the means to display the parallax induced by camera motion as well as reveal disoccluded regions. The average depth complexity in our LDIs is much lower than one would achieve using multiple input images (e.g., only 1.24 in the Chicken LDI). The LDI representation takes advantage of McMillan's ordering algorithm, allowing pixels to be splatted back to front with an over compositing operation.

Traditional graphics elements and planar sprites can be combined with Sprites with Depth and LDIs in the same scene if a back-to-front ordering is maintained. In this case they are simply composited onto one another. Without such an ordering a z-buffer approach will still work at the extra cost of maintaining depth information per frame.

Choosing a single camera view to organize the data has the advantage of having sampled the geometry with a preference for views very near the center of the LDI. This also has its disadvantages. First, pixels undergo two resampling steps in their journey from input image to output. This can potentially degrade image quality. Secondly, if some surface is seen at a glancing angle in the LDI's view, the depth complexity for that LDI increases, while the spatial sampling resolution over that surface degrades. The sampling and aliasing issues involved in our layered depth image approach are still not fully understood; a formal analysis of these issues would be helpful.

With the introduction of our two new representations and rendering techniques, there now exists a wide range of different image based rendering methods available. At one end of the spectrum are traditional texture-mapped models. When the scene does not have too much geometric detail, and when texture-mapping hardware is available, this may be the method of choice. If the scene can easily be partitioned into non-overlapping sprites (with depth), then triangle-based texture-mapped rendering can be used without requiring a z buffer [18, 4].

All of these representations, however, do not explicitly account for certain variation of scene appearance with viewpoint, e.g., specularities, transparency, etc. View-dependent texture maps [5], and 4D representations such as lightfields or Lumigraphs [16, 7], have been designed to model such effects. These techniques can lead to greater realism than static texture maps, sprites, or Layered Depth Images, but usually require more effort (and time) to render.

In future work, we hope to explore representations and rendering algorithms which combine several image based rendering techniques. Automatic techniques for taking a 3D scene (either synthesized or real) and re-representing it in the most appropriate fashion for image based rendering would be very useful. These would allow us to apply image based rendering to truly complex, visually rich scenes, and thereby extend their range of applicability.

Acknowledgments

The authors would first of all like to thank Michael F. Cohen. Many of the original ideas contained in this paper as well as much of the discussion in the paper itself can be directly attributed to him. The authors would also like to thank Craig Kolb for his help in obtaining and modifying Rayshade. Radomir Mech and Przemyslaw Prusinkiewicz provided the model of the chestnut tree. Steve Seitz is responsible for creating the LDI of the dinosaur from a modified version of his earlier code. Andrew Glassner was a great help with some of the illustrations in the paper. Finally, we would like to thank Microsoft Research for helping to bring together the authors to work on this project.

References

[1] S. Baker, R. Szeliski, and P. Anandan. A Layered Approach to Stereo Reconstruction. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'98). Santa Barbara, June 1998.

[2] Shenchang Eric Chen and Lance Williams. View Interpolation for Image Synthesis. In James T. Kajiya, editor, Computer Graphics (SIGGRAPH '93 Proceedings), volume 27, pages 279-288. August 1993.

[3] William Dally, Leonard McMillan, Gary Bishop, and Henry Fuchs. The Delta Tree: An Object Centered Approach to Image Based Rendering. AI Technical Memo 1604, MIT, 1996.

[4] Lucia Darsa, Bruno Costa Silva, and Amitabh Varshney. Navigating Static Environments Using Image-Space Simplification and Morphing. In Proc. 1997 Symposium on Interactive 3D Graphics, pages 25-34. 1997.

[5] Paul E. Debevec, Camillo J. Taylor, and Jitendra Malik. Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 11-20. ACM SIGGRAPH, Addison Wesley, August 1996.

[6] O. Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Cambridge, Massachusetts, 1993.

[7] Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, and Michael F. Cohen. The Lumigraph. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 43-54. ACM SIGGRAPH, Addison Wesley, August 1996.

[8] Paul S. Heckbert. Survey of Texture Mapping. IEEE Computer Graphics and Applications, 6(11):56-67, November 1986.

[9] Paul S. Heckbert and Henry P. Moreton. Interpolation for Polygon Texture Mapping and Shading. In David Rogers and Rae Earnshaw, editors, State of the Art in Computer Graphics: Visualization and Modeling, pages 101-111. Springer-Verlag, 1991.

[10] Youichi Horry, Ken-ichi Anjyo, and Kiyoshi Arai. Tour Into the Picture: Using a Spidery Mesh Interface to Make Animation from a Single Image. In Turner Whitted, editor, SIGGRAPH 97 Conference Proceedings, Annual Conference Series, pages 225-232. ACM SIGGRAPH, Addison Wesley, August 1997.

[11] Craig E. Kolb. Rayshade User's Guide and Reference Manual. http://graphics.stanford.edu/~cek/rayshade, 1992.

[12] R. Kumar, P. Anandan, and K. Hanna. Direct Recovery of Shape from Multiple Views: A Parallax Based Approach. In Twelfth International Conference on Pattern Recognition (ICPR'94), volume A, pages 685-688. IEEE Computer Society Press, Jerusalem, Israel, October 1994.

[13] Anthony G. LaMarca. Caches and Algorithms. Ph.D. thesis, University of Washington, 1996.

[14] S. Laveau and O. D. Faugeras. 3-D Scene Representation as a Collection of Images. In Twelfth International Conference on Pattern Recognition (ICPR'94), volume A, pages 689-691. IEEE Computer Society Press, Jerusalem, Israel, October 1994.

[15] Jed Lengyel and John Snyder. Rendering with Coherent Layers. In Turner Whitted, editor, SIGGRAPH 97 Conference Proceedings, Annual Conference Series, pages 233-242. ACM SIGGRAPH, Addison Wesley, August 1997.

[16] Marc Levoy and Pat Hanrahan. Light Field Rendering. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 31-42. ACM SIGGRAPH, Addison Wesley, August 1996.

[17] Mark Levoy and Turner Whitted. The Use of Points as a Display Primitive. Technical Report 85-022, University of North Carolina, 1985.

[18] William R. Mark, Leonard McMillan, and Gary Bishop. Post-Rendering 3D Warping. In Proc. 1997 Symposium on Interactive 3D Graphics, pages 7-16. 1997.

[19] Nelson Max. Hierarchical Rendering of Trees from Precomputed Multi-Layer Z-Buffers. In Xavier Pueyo and Peter Schröder, editors, Eurographics Rendering Workshop 1996, pages 165-174. Eurographics, Springer Wien, New York City, NY, June 1996.

[20] Leonard McMillan. Computing Visibility Without Depth. Technical Report 95-047, University of North Carolina, 1995.

[21] Leonard McMillan. A List-Priority Rendering Algorithm for Redisplaying Projected Surfaces. Technical Report 95-005, University of North Carolina, 1995.

[22] Leonard McMillan and Gary Bishop. Plenoptic Modeling: An Image-Based Rendering System. In Robert Cook, editor, SIGGRAPH 95 Conference Proceedings, Annual Conference Series, pages 39-46. ACM SIGGRAPH, Addison Wesley, August 1995.

[23] Leonard McMillan and Gary Bishop. Shape as a Perturbation to Projective Mapping. Technical Report 95-046, University of North Carolina, 1995.

[24] Don P. Mitchell. Personal communication, 1997.

[25] Matt Pharr, Craig Kolb, Reid Gershbein, and Pat Hanrahan. Rendering Complex Scenes with Memory-Coherent Ray Tracing. In Turner Whitted, editor, SIGGRAPH 97 Conference Proceedings, Annual Conference Series, pages 101-108. ACM SIGGRAPH, Addison Wesley, August 1997.

[26] H. S. Sawhney. 3D Geometry from Planar Parallax. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'94), pages 929-934. IEEE Computer Society, Seattle, Washington, June 1994.

[27] Gernot Schaufler and Wolfgang Stürzlinger. A Three-Dimensional Image Cache for Virtual Reality. In Proceedings of Eurographics '96, pages 227-236. August 1996.

[28] Mark Segal, Carl Korobkin, Rolf van Widenfelt, Jim Foran, and Paul E. Haeberli. Fast Shadows and Lighting Effects Using Texture Mapping. In Edwin E. Catmull, editor, Computer Graphics (SIGGRAPH '92 Proceedings), volume 26, pages 249-252. July 1992.

[29] Steven M. Seitz and Charles R. Dyer. View Morphing: Synthesizing 3D Metamorphoses Using Image Transforms. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 21-30. ACM SIGGRAPH, Addison Wesley, August 1996.

[30] Steven M. Seitz and Charles R. Dyer. Photorealistic Scene Reconstruction by Voxel Coloring. In Proc. Computer Vision and Pattern Recognition Conf., pages 1067-1073. 1997.

[31] Jonathan Shade, Dani Lischinski, David Salesin, Tony DeRose, and John Snyder. Hierarchical Image Caching for Accelerated Walkthroughs of Complex Environments. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 75-82. ACM SIGGRAPH, Addison Wesley, August 1996.

[32] Jay Torborg and Jim Kajiya. Talisman: Commodity Real-time 3D Graphics for the PC. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 353-364. ACM SIGGRAPH, Addison Wesley, August 1996.

[33] J. Y. A. Wang and E. H. Adelson. Layered Representation for Motion Analysis. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'93), pages 361-366. New York, New York, June 1993.

[34] Lee Westover. Footprint Evaluation for Volume Rendering. In Forest Baskett, editor, Computer Graphics (SIGGRAPH '90 Proceedings), volume 24, pages 367-376. August 1990.

[35] G. Wolberg. Digital Image Warping. IEEE Computer Society Press, Los Alamitos, California, 1990.
