Layered Depth Images
Abstract

In this paper we present a set of efficient image based rendering methods capable of rendering multiple frames per second on a PC. The first method warps Sprites with Depth representing smooth surfaces without the gaps found in other techniques. A second method for more general scenes performs warping from an intermediate representation called a Layered Depth Image (LDI). An LDI is a view of the scene from a single input camera view, but with multiple pixels along each line of sight. The size of the representation grows only linearly with the observed depth complexity in the scene. Moreover, because the LDI data are represented in a single image coordinate system, McMillan's warp ordering algorithm can be successfully adapted. As a result, pixels are drawn in the output image in back-to-front order. No z-buffer is required, so alpha-compositing can be done efficiently without depth sorting. This makes splatting an efficient solution to the resampling problem.

1 Introduction

Image based rendering (IBR) techniques have been proposed as an efficient way of generating novel views of real and synthetic objects. With traditional rendering techniques, the time required to render an image increases with the geometric complexity of the scene. The rendering time also grows as the requested shading computations (such as those requiring global illumination solutions) become more ambitious.

The most familiar IBR method is texture mapping. An image is remapped onto a surface residing in a three-dimensional scene. Traditional texture mapping exhibits two serious limitations. First, the pixelization of the texture map and that of the final image may be vastly different. The aliasing of the classic infinite checkerboard floor is a clear illustration of the problems this mismatch can create. Second, texture mapping speed is still limited by the surface the texture is applied to. Thus it would be very difficult to create a texture mapped tree containing thousands of leaves that exhibits appropriate parallax as the viewpoint changes.

Two extensions of the texture mapping model have recently been presented in the computer graphics literature that address these two difficulties. The first is a generalization of sprites. Once a complex scene is rendered from a particular point of view, the image that would be created from a nearby point of view will likely be similar. In this case, the original 2D image, or sprite, can be slightly altered by a 2D affine or projective transformation to approximate the view from the new camera position [31, 27, 15].

The sprite approximation's fidelity to the correct new view is highly dependent on the geometry being represented. In particular, the errors increase with the amount of depth variation in the real part of the scene captured by the sprite. The amount of virtual camera motion away from the point of view of sprite creation also increases the error. Errors decrease with the distance of the geometry from the virtual camera.

The second recent extension is to add depth information to an image to produce a depth image and to then use the optical flow that would be induced by a camera shift to warp the scene into an approximation of the new view [2, 22].

Each of these methods has its limitations. Simple sprite warping cannot produce the parallax induced when parts of the scene have sizable differences in distance from the camera. Flowing a depth image pixel by pixel, on the other hand, can provide proper parallax but will result in gaps in the image, either due to visibility changes when some portion of the scene becomes unoccluded, or when a surface is magnified in the new view.

Some solutions have been proposed to the latter problem. Laveau and Faugeras suggest performing a backwards mapping from the output sample location to the input image [14]. This is an expensive operation that requires some amount of searching in the input image. Another possible solution is to think of the input image as a mesh of micro-polygons, and to scan-convert these polygons in the output image. This is an expensive operation, as it requires a polygon scan-convert setup for each input pixel [18], an operation we would prefer to avoid, especially in the absence of specialized rendering hardware. Alternatively, one could use multiple input images from different viewpoints. However, if one uses n input images, one effectively multiplies the size of the scene description by n, and the rendering cost increases accordingly.

This paper introduces two new extensions to overcome both of these limitations. The first extension is primarily applicable to smoothly varying surfaces, while the second is useful primarily for very complex geometries. Each method provides efficient image based rendering capable of producing multiple frames per second on a PC.

In the case of sprites representing smoothly varying surfaces, we introduce an algorithm for rendering Sprites with Depth. The algorithm first forward maps (i.e., warps) the depth values themselves and then uses this information to add parallax corrections to a standard sprite renderer.

For more complex geometries, we introduce the Layered Depth Image, or LDI, that contains potentially multiple depth pixels at each discrete location in the image. Instead of a 2D array of depth pixels (a pixel with associated depth information), we store a 2D array of layered depth pixels. A layered depth pixel stores a set of depth pixels along one line of sight, sorted in front-to-back order. The front element in the layered depth pixel samples the first surface seen along that line of sight; the next pixel in the layered depth pixel samples the next surface seen along that line of sight, etc. When rendering from an LDI, the requested view can move away from the original LDI view and expose surfaces that were not visible in the first layer. The previously occluded regions may still be rendered from data stored in some later layer of a layered depth pixel.

There are many advantages to this representation. The size of the representation grows only linearly with the observed depth complexity in the scene.
2 Previous Work
Over the past few years, there have been many papers on image based rendering. In [17], Levoy and Whitted discuss rendering point data. Chen and Williams presented the idea of rendering from images [2]. Laveau and Faugeras discuss IBR using a backwards map [14]. McMillan and Bishop discuss IBR using cylindrical views [22]. Seitz and Dyer describe a system that allows a user to correctly model view transforms in a user controlled image morphing system [29]. In a slightly different direction, Levoy and Hanrahan [16] and Gortler et al. [7] describe IBR methods using a large number of input images to sample the high dimensional radiance function.

[Figure: diagram labels Polygons, Sprite with Depth, Layered Depth Image.]
3D model. These affine transforms are allowed to vary in time as the position and/or color of the sample points change. Hardware considerations for such a system are discussed in [32].

Horry et al. [10] describe a very simple sprite-like system in which a user interactively indicates planes in an image that represent areas in a given image. Thus, from a single input image and some user supplied information, they can warp an image and provide approximate three dimensional cues about the scene.

The system presented here relies heavily on McMillan's ordering algorithm [21, 20, 22]. Using input and output camera information, a warping order is computed such that pixels that map to the same location in the output image are guaranteed to arrive in back to front order.

In McMillan's work, the depth order is computed by first finding the projection of the output camera's location in the input camera's image plane, that is, the intersection of the line joining the two camera locations with the input camera's image plane. The line joining the two camera locations is called the epipolar line, and the intersection with the image plane is called an epipolar point [6] (see Figure 1). The input image is then split horizontally and vertically at the epipolar point, generally creating 4 image quadrants. (If the epipolar point lies off the image plane, we may have only 2 or 1 regions.) The pixels in each of the quadrants are processed in a different order. Depending on whether the output camera is in front of or behind the input camera, the pixels in each quadrant are processed either inward towards the epipolar point or outwards away from it. In other words, one of the quadrants is processed left to right, top to bottom, another is processed left to right, bottom to top, etc. McMillan discusses in detail the various special cases that arise and proves that this ordering is guaranteed to produce depth ordered output [20].

When warping from an LDI, there is effectively only one input camera view. Therefore one can use the ordering algorithm to order the layered depth pixels visited. Within each layered depth pixel, the layers are processed in back to front order. The formal proof of [20] applies, and the ordering algorithm is guaranteed to work.
3 Rendering Sprites

Sprites are texture maps or images with alphas (transparent pixels) rendered onto planar surfaces. They can be used either for locally caching the results of slower rendering and then generating new views by warping [31, 27, 32, 15], or they can be used directly as drawing primitives (as in video games).

The texture map associated with a sprite can be computed by simply choosing a 3D viewing matrix and projecting some portion of the scene onto the image plane. In practice, a view associated with the current or expected viewpoint is a good choice. A 3D plane equation can also be computed for the sprite, e.g., by fitting a 3D plane to the z-buffer values associated with the sprite pixels. Below, we derive the equations for the 2D perspective mapping between a sprite and its novel view. This is useful both for implementing a backward mapping algorithm, and lays the foundation for our Sprites with Depth rendering algorithm.

A sprite consists of an alpha-matted image I1(x1, y1), a 4x4 camera matrix C1 which maps from 3D world coordinates (X, Y, Z, 1) into the sprite's coordinates (x1, y1, z1, 1),

$$\begin{bmatrix} w_1 x_1 \\ w_1 y_1 \\ w_1 z_1 \\ w_1 \end{bmatrix} = \mathbf{C}_1 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad (1)$$

(z1 is the z-buffer value), and a plane equation. This plane equation can either be specified in world coordinates, AX + BY + CZ + D = 0, or it can be specified in the sprite's coordinate system, a x1 + b y1 + c z1 + d = 0. In the former case, we can form a new camera matrix Ĉ1 by replacing the third row of C1 with the row [A B C D], while in the latter, we can compute Ĉ1 = P C1, where

$$\mathbf{P} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ a & b & c & d \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

(note that [A B C D] = [a b c d] C1).

In either case, we can write the modified projection equation as

$$\begin{bmatrix} w_1 x_1 \\ w_1 y_1 \\ w_1 d_1 \\ w_1 \end{bmatrix} = \hat{\mathbf{C}}_1 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad (2)$$

where d1 = 0 for pixels on the plane. For pixels off the plane, d1 is the scaled perpendicular distance to the plane (the scale factor is 1 if A² + B² + C² = 1) divided by the pixel to camera distance w1.
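To make this construction concrete, the following sketch (our own illustration, not code from the paper; the row-major Mat4 type is an assumption) forms Ĉ1 both ways:

    #include <array>

    // Minimal 4x4 matrix type (row-major); an illustrative assumption,
    // not the representation used in the paper's implementation.
    struct Mat4 {
        std::array<std::array<double, 4>, 4> m{};
    };

    Mat4 mul(const Mat4& a, const Mat4& b) {
        Mat4 r;
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i][j] += a.m[i][k] * b.m[k][j];
        return r;
    }

    // World-coordinate plane AX + BY + CZ + D = 0:
    // replace the third row of C1 with [A B C D].
    Mat4 modifiedCameraWorld(Mat4 C1, double A, double B, double C, double D) {
        C1.m[2] = {A, B, C, D};
        return C1;
    }

    // Sprite-coordinate plane a*x1 + b*y1 + c*z1 + d = 0:
    // premultiply by P, which moves the plane distance into the third row.
    Mat4 modifiedCameraSprite(const Mat4& C1, double a, double b, double c, double d) {
        Mat4 P;
        P.m[0] = {1, 0, 0, 0};
        P.m[1] = {0, 1, 0, 0};
        P.m[2] = {a, b, c, d};
        P.m[3] = {0, 0, 1, 0};
        return mul(P, C1);  // C1_hat = P * C1
    }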
and its novel view. This is useful both for implementing a back-
ward mapping algorithm, and lays the foundation for our Sprites
where e1,2 is called epipole [6, 26, 12], and is obtained from the
with Depth rendering algorithm.
third column of T1,2 .
3.1 Sprites with Depth

The descriptive power (realism) of sprites can be greatly enhanced by adding an out-of-plane displacement component d1 at each pixel in the sprite.¹ Unfortunately, such a representation can no longer be rendered directly using a backward mapping algorithm.

¹ The d1 values can be stored as a separate image, say as 8-bit signed depths. The full precision of a traditional z-buffer is not required, since these depths are used only to compute local parallax, and not to perform z-buffer merging of primitives. Furthermore, the d1 image could be stored at a lower resolution than the color image, if desired.

Using the same notation as before, we see that the transfer equation is now

$$\begin{bmatrix} w_2 x_2 \\ w_2 y_2 \\ w_2 \end{bmatrix} = \mathbf{H}_{1,2} \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} + d_1 \mathbf{e}_{1,2}, \qquad (4)$$

where e1,2 is called the epipole [6, 26, 12], and is obtained from the third column of T1,2.

Equation (4) can be used to forward map pixels from a sprite to a new view. Unfortunately, this entails the usual problems associated with forward mapping, e.g., the necessity to fill gaps or to use larger
splatting kernels, and the difficulty in achieving proper resampling. Notice, however, that Equation (4) could be used to perform a backward mapping step by interchanging the 1 and 2 indices, if only we knew the displacements d2 in the output camera's coordinate frame.

A solution to this problem is to first forward map the displacements d1, and to then use Equation (4) to perform a backward mapping step with the new (view-based) displacements. While this may at first appear to be no faster or more accurate than simply forward warping the color values, it does have some significant advantages.

First, small errors in displacement map warping will not be as evident as errors in the sprite image warping, at least if the displacement map is smoothly varying (in practice, the shape of a simple surface often varies more smoothly than its photometry). If bilinear or higher order filtering is used in the final color (backward) resampling, this two-stage warping will have much lower errors than forward mapping the colors directly with an inaccurate forward map. We can therefore use a quick single-pixel splat algorithm followed by a quick hole filling, or alternatively, use a simple 2x2 splat.

The second main advantage is that we can design the forward warping step to have a simpler form by factoring out the planar perspective warp. Notice that we can rewrite Equation (4) as

$$\begin{bmatrix} w_2 x_2 \\ w_2 y_2 \\ w_2 \end{bmatrix} = \mathbf{H}_{1,2} \begin{bmatrix} x_3 \\ y_3 \\ 1 \end{bmatrix}, \qquad (5)$$

with

$$\begin{bmatrix} w_3 x_3 \\ w_3 y_3 \\ w_3 \end{bmatrix} = \begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} + d_1 \mathbf{e}^*_{1,2}, \qquad (6)$$

where $\mathbf{e}^*_{1,2} = \mathbf{H}_{1,2}^{-1} \mathbf{e}_{1,2}$. This suggests that Sprite with Depth rendering can be implemented by first shifting pixels by their local parallax, filling any resulting gaps, and then applying a global homography (planar perspective warp). This has the advantage that it can handle large changes in view (e.g., large zooms) with only a small amount of gap filling (since gaps arise only in the first step, and are due to variations in displacement).
Our novel two-step rendering algorithm thus proceeds in two stages:

1. forward map the displacement map d1(x1, y1), using only the parallax component given in Equation (6) to obtain d3(x3, y3);

2a. backward map the resulting warped displacements d3(x3, y3) using Equation (5) to obtain d2(x2, y2) (the displacements in the new camera view);

2b. backward map the original sprite colors, using both the homography H2,1 and the new parallax d2 as in Equation (4) (with the 1 and 2 indices interchanged), to obtain the image corresponding to camera C2.

The last two operations can be combined into a single raster scan over the output image, avoiding the need to perspective warp d3 into d2. More precisely, for each output pixel (x2, y2), we compute (x3, y3) such that

$$\begin{bmatrix} w_3 x_3 \\ w_3 y_3 \\ w_3 \end{bmatrix} = \mathbf{H}_{2,1} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix} \qquad (7)$$

to compute where to look up the displacement d3(x3, y3), and form the final address of the source sprite pixel using

$$\begin{bmatrix} w_1 x_1 \\ w_1 y_1 \\ w_1 \end{bmatrix} = \begin{bmatrix} w_3 x_3 \\ w_3 y_3 \\ w_3 \end{bmatrix} + d_3(x_3, y_3)\, \mathbf{e}_{2,1}. \qquad (8)$$
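A condensed sketch of this two-pass warp under Equations (6)-(8) might look as follows. The gap-filling pass and bilinear resampling are omitted, and all type and function names here are our own illustrative assumptions rather than the paper's code.

    #include <vector>

    struct Vec3 { double x, y, w; };

    inline Vec3 apply(const double M[3][3], double x, double y) {
        return { M[0][0]*x + M[0][1]*y + M[0][2],
                 M[1][0]*x + M[1][1]*y + M[1][2],
                 M[2][0]*x + M[2][1]*y + M[2][2] };
    }

    void warpSpriteWithDepth(
        const std::vector<float>& d1, const std::vector<unsigned>& color1,
        int w, int h,
        const double H21[3][3],   // homography from output pixels to sprite pixels
        const double eStar12[3],  // e*_{1,2} = H_{1,2}^{-1} e_{1,2}
        const double e21[3],      // epipole used in the backward step
        std::vector<float>& d3, std::vector<unsigned>& out)
    {
        // Pass 1: forward map the displacements by pure parallax (Equation 6).
        for (int y1 = 0; y1 < h; ++y1)
            for (int x1 = 0; x1 < w; ++x1) {
                double d  = d1[y1*w + x1];
                double wz = 1 + d*eStar12[2];
                int x3 = int((x1 + d*eStar12[0]) / wz + 0.5);
                int y3 = int((y1 + d*eStar12[1]) / wz + 0.5);
                if (x3 >= 0 && x3 < w && y3 >= 0 && y3 < h)
                    d3[y3*w + x3] = d;  // single-pixel splat; gaps filled afterwards
            }
        // Pass 2: single raster scan over the output (Equations 7 and 8).
        for (int y2 = 0; y2 < h; ++y2)
            for (int x2 = 0; x2 < w; ++x2) {
                Vec3 p3 = apply(H21, x2, y2);  // (w3 x3, w3 y3, w3)
                int x3 = int(p3.x / p3.w + 0.5), y3 = int(p3.y / p3.w + 0.5);
                if (x3 < 0 || x3 >= w || y3 < 0 || y3 >= h) continue;
                double d = d3[y3*w + x3];
                // Final source address (Equation 8), then resample the color.
                double w1 = p3.w + d*e21[2];
                int x1 = int((p3.x + d*e21[0]) / w1 + 0.5);
                int y1 = int((p3.y + d*e21[1]) / w1 + 0.5);
                if (x1 >= 0 && x1 < w && y1 >= 0 && y1 < h)
                    out[y2*w + x2] = color1[y1*w + x1];
            }
    }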
dering can be implemented by first shifting pixels by their local Figure 3 shows some of the steps in our two-pass warping algo-
parallax, filling any resulting gaps, and then applying a global ho- rithm. Figures 3a and 3f show the original sprite (color) image and
mography (planar perspective warp). This has the advantage that the depth map. Figure 3b shows the sprite warped with no parallax.
it can handle large changes in view (e.g., large zooms) with only a Figures 3g, 3h, and 3i shows the depth map forward warped with
small amount of gap filling (since gaps arise only in the first step, only pure parallax, only the perspective projection, and both. Fig-
and are due to variations in displacement). ure 3c shows the backward warp using the incorrect depth map d1
(note how dark “background” colors are mapped onto the “bump”),
Our novel two-step rendering algorithm thus proceeds in two whereas Figure 3d shows the backward warp using the correct depth
stages: map d3 . The white pixels near the right hand edge are a result of
using only a single step of gap filling. Using three steps results in
1. forward map the displacement map d1 (x1 , y1 ), using only the the better quality image shown in Figure 3e. Gaps also do not ap-
parallax component given in Equation (6) to obtain d3 (x3 , y3 ); pear for a less quickly slanting d maps, such as the pyramid shown
in Figure 3j.
2a. backward map the resulting warped displacements d3 (x3 , y3 )
using Equation (5) to obtain d2 (x2 , y2 ) (the displacements in
The rendering times for the 256 256 image shown in Figure 3 on a
300 MHz Pentium II are as follows. Using bilinear pixel sampling,
the new camera view); the frame rates are 30 Hz for no z-parallax, 21 Hz for “crude” one-
pass warping (no forward warping of d1 values), and 16 Hz for
2b. backward map the original sprite colors, using both the ho- two-pass warping. Using nearest-neighbor resampling, the frame
mography H2,1 and the new parallax d2 as in Equation (4) rates go up to 47 Hz, 24 Hz, and 20 Hz, respectively.
(with the 1 and 2 indices interchanged), to obtain the image
corresponding to camera C2 . 3.2 Recovering sprites from image sequences
The last two operations can be combined into a single raster scan While sprites and sprites with depth can be generated using com-
over the output image, avoiding the need to perspective warp d3 puter graphics techniques, they can also be extracted from image
into d2 . More precisely, for each output pixel (x2 , y2 ), we compute sequences using computer vision techniques. To do this, we use a
(x3 , y3 ) such that layered motion estimation algorithm [33, 1], which simultaneously
segments the sequence into coherently moving regions, and com-
2 3 2 3 putes a parametric motion estimate (planar perspective transforma-
w3 x3 x2 tion) for each layer. To convert the recovered layers into sprites, we
4w3 y3 5 = H2,1 4y2 5 (7) need to determine the plane equation associated with each region.
w3 1 We do this by tracking features from frame to frame and applying
a standard structure from motion algorithm to recover the camera parameters (viewing matrices) for each frame [6]. Tracking several points on each sprite enables us to reconstruct their 3D positions, and hence to estimate their 3D plane equations [1]. Once the sprite pixel assignments have been recovered, we run a traditional stereo algorithm to recover the out-of-plane displacements.

The results of applying the layered motion estimation algorithm to the first five images from a 40-image stereo dataset² are shown in Figure 4. Figure 4(a) shows the middle input image, Figure 4(b) shows the initial pixel assignment to layers, Figure 4(c) shows the recovered depth map, and Figure 4(e) shows the residual depth map for layer 5. Figure 4(d) shows the recovered sprites. Figure 4(f) shows the middle image re-synthesized from these sprites, while Figures 4(g-h) show the same sprite collection seen from a novel viewpoint (well outside the range of the original views), both with and without residual depth-based correction (parallax). The gaps visible in Figures 4(c) and 4(f) lie outside the area corresponding to the middle image, where the appropriate parts of the background sprites could not be seen.

² Courtesy of Dayton Taylor.

4 Layered Depth Images

While the use of sprites and Sprites with Depth provides a fast means to warp planar or smoothly varying surfaces, more general scenes require the ability to handle more general disocclusions and large amounts of parallax as the viewpoint moves. These needs have led to the development of Layered Depth Images (LDI).

Like a sprite with depth, pixels contain depth values along with their colors (i.e., a depth pixel). In addition, a Layered Depth Image (Figure 5) contains potentially multiple depth pixels per pixel location. The farther depth pixels, which are occluded from the LDI center, will act to fill in the disocclusions that occur as the viewpoint moves away from the center.

[Figure 5: Layered Depth Image. Diagram labels: Layered Depth Pixel, Depth Pixel, a, b, c, d, a2, C1, C2, C3.]

The structure of an LDI is summarized by the following conceptual representation:

    DepthPixel =
        ColorRGBA: 32 bit integer
        Z: 20 bit integer
        SplatIndex: 11 bit integer

    LayeredDepthPixel =
        NumLayers: integer
        Layers[0..numlayers-1]: array of DepthPixel

    LayeredDepthImage =
        Camera: camera
        Pixels[0..xres-1,0..yres-1]: array of LayeredDepthPixel
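A direct C++ transcription of this conceptual representation might look as follows. This is a sketch: the bit-level render-time packing discussed in Section 5.4 and the linked-list variant used during construction are only noted in comments.

    #include <cstdint>
    #include <vector>

    struct DepthPixel {
        uint32_t colorRGBA;            // 32 bit color with alpha
        uint32_t z : 20;               // quantized depth
        uint32_t splatIndex : 11;      // index into the splat-size lookup table
    };

    struct LayeredDepthPixel {
        int numLayers = 0;
        std::vector<DepthPixel> layers;  // front-to-back along the line of sight
                                         // (a linked list at construction time)
    };

    struct Camera { /* 4x4 camera matrix, field of view, ... */ };

    struct LayeredDepthImage {
        Camera camera;
        int xres = 0, yres = 0;
        std::vector<LayeredDepthPixel> pixels;  // xres * yres layered depth pixels

        LayeredDepthPixel& at(int x, int y) { return pixels[y * xres + x]; }
    };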
The layered depth image contains camera information plus an array of size xres by yres layered depth pixels. In addition to image data, each layered depth pixel has an integer indicating how many valid depth pixels are contained in that pixel. The data contained in the depth pixel includes the color, the depth of the object seen at that pixel, plus an index into a table that will be used to calculate a splat size for reconstruction. This index is composed from a combination of the normal of the object seen and the distance from the LDI camera.

In practice, we implement Layered Depth Images in two ways. When creating layered depth images, it is important to be able to efficiently insert and delete layered depth pixels, so the Layers array in the LayeredDepthPixel structure is implemented as a linked list. When rendering, it is important to maintain spatial locality of depth pixels in order to most effectively take advantage of the cache in the CPU [13]. In Section 5.1 we discuss the compact render-time version of layered depth images.

There are a variety of ways to generate an LDI. Given a synthetic scene, we could use multiple images from nearby points of view for which depth information is available at each pixel. This information can be gathered from a standard ray tracer that returns depth per pixel or from a scan conversion and z-buffer algorithm where the z-buffer is also returned. Alternatively, we could use a ray tracer to sample an environment in a less regular way and then store computed ray intersections in the LDI structure. Given multiple real images, we can turn to computer vision techniques that can infer pixel correspondence and thus deduce depth values per pixel. We will demonstrate results from each of these three methods.

4.1 LDIs from Multiple Depth Images

We can construct an LDI by warping n depth images into a common camera view. For example, the depth images C2 and C3 in Figure 5 can be warped to the camera frame defined by the LDI (C1 in Figure 5).³ If, during the warp from the input camera to the LDI camera, two or more pixels map to the same layered depth pixel, their Z values are compared. If the Z values differ by more than a preset epsilon, a new layer is added to that layered depth pixel for each distinct Z value (i.e., NumLayers is incremented and a new depth pixel is added), otherwise (e.g., depth pixels c and d in Figure 5), the values are averaged resulting in a single depth pixel. This preprocessing is similar to the rendering described by Max [19]. This construction of the layered depth image is effectively decoupled from the final rendering of images from desired viewpoints. Thus, the LDI construction does not need to run at multiple frames per second to allow interactive camera motion.

³ Any arbitrary single coordinate system can be specified here. However, we have found it best to use one of the original camera coordinate systems. This results in fewer pixels needing to be resampled twice; once in the LDI construction, and once in the rendering process.
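Under the structures sketched above, this merge rule could be implemented along the following lines; averageColor and the epsilon units are illustrative assumptions.

    #include <cstdint>

    uint32_t averageColor(uint32_t a, uint32_t b) {
        // Average each 8-bit channel; good enough for a sketch.
        return ((a >> 1) & 0x7F7F7F7Fu) + ((b >> 1) & 0x7F7F7F7Fu);
    }

    // Insert a warped source depth pixel into a layered depth pixel:
    // average with a layer whose Z matches within epsilon, otherwise
    // add a new layer, keeping the list sorted front to back.
    void insertDepthPixel(LayeredDepthPixel& ldp, const DepthPixel& p, uint32_t epsilon) {
        for (auto& layer : ldp.layers) {
            uint32_t dz = layer.z > p.z ? layer.z - p.z : p.z - layer.z;
            if (dz <= epsilon) {  // same surface: average the samples
                layer.colorRGBA = averageColor(layer.colorRGBA, p.colorRGBA);
                return;
            }
        }
        // Distinct Z value: insert a new layer in front-to-back order.
        auto it = ldp.layers.begin();
        while (it != ldp.layers.end() && it->z < p.z) ++it;
        ldp.layers.insert(it, p);
        ++ldp.numLayers;
    }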
4.2 LDIs from a Modified Ray Tracer

By construction, a Layered Depth Image reconstructs images of a scene well from the center of projection of the LDI (we simply display the nearest depth pixels). The quality of the reconstruction from another viewpoint will depend on how closely the distribution of depth pixels in the LDI, when warped to the new viewpoint, corresponds to the pixel density in the new image. Two common events that occur are: (1) disocclusions as the viewpoint changes,
and (2) surfaces that grow in terms of screen space. For example, when a surface is edge on to the LDI, it covers no area. Later, it may face the new viewpoint and thus cover some screen space.

When using a ray tracer, we have the freedom to sample the scene with any distribution of rays we desire. We could simply allow the rays emanating from the center of the LDI to pierce surfaces, recording each hit along the way (up to some maximum). This would solve the disocclusion problem but would not effectively sample surfaces edge on to the LDI.

What set of rays should we trace to sample the scene, to best approximate the distribution of rays from all possible viewpoints we are interested in? For simplicity, we have chosen to use a cubical region of empty space surrounding the LDI center to represent the region that the viewer is able to move in. Each face of the viewing cube defines a 90 degree frustum which we will use to define a single LDI (Figure 6). The six faces of the viewing cube thus cover all of space. For the following discussion we will refer to a single LDI.

Each ray in free space has four coordinates, two for position and two for direction. Since all rays of interest intersect the cube faces, we will choose the outward intersection to parameterize the position of the ray. Direction is parameterized by two angles.

Given no a priori knowledge of the geometry in the scene, we assume that every ray intersecting the cube is equally important. To achieve a uniform density of rays we sample the positional coordinates uniformly. A uniform distribution over the hemisphere of directions requires that the probability of choosing a direction is proportional to the projected area in that direction. Thus, the direction is weighted by the cosine of the angle off the normal to the cube face.

Choosing a cosine weighted direction over a hemisphere can be accomplished by uniformly sampling the unit disk formed by the base of the hemisphere to get two coordinates of the ray direction, say x and y if the z-axis is normal to the disk. The third coordinate is chosen to give a unit length (z = √(1 − x² − y²)). We make the selection within the disk by first selecting a point in the unit square, then applying a measure preserving mapping [24] that maps the unit square to the unit disk.
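A sketch of this sampling step follows, assuming the z-axis is the cube-face normal. For brevity it uses the familiar polar square-to-disk mapping in place of the measure preserving mapping of [24] that the paper applies; both yield a uniform distribution on the disk.

    #include <algorithm>
    #include <cmath>
    #include <random>

    struct Dir { double x, y, z; };

    Dir sampleCosineDirection(std::mt19937& rng) {
        const double kPi = 3.14159265358979323846;
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        double u = uni(rng), v = uni(rng);   // point in the unit square
        double r = std::sqrt(u);             // square -> disk (uniform in area)
        double phi = 2.0 * kPi * v;
        double x = r * std::cos(phi);        // two coordinates of the direction
        double y = r * std::sin(phi);
        double z = std::sqrt(std::max(0.0, 1.0 - x*x - y*y));  // unit length
        return {x, y, z};
    }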
Given this desired distribution of rays, there are a variety of ways to perform the sampling:

Uniform. A straightforward stochastic method would take as input the number of rays to cast. Then, for each ray it would choose an origin on the cube face and a direction from the cosine distribution and cast the ray into the scene. There are two problems with this simple scheme. First, such white noise distributions tend to form unwanted clumps. Second, since there is no coherence between rays, complex scenes require considerable memory thrashing since rays will access the database in a random way [25]. The model of the chestnut tree seen in the color images was too complex to sample with a pure stochastic method on a machine with 320MB of memory.

Stratified Stochastic. To improve the coherence and distribution of rays, we employ a stratified scheme. In this method, we divide the 4D space of rays uniformly into a grid of N x N x N x N strata. For each stratum, we cast M rays. Enough coherence exists within a stratum that swapping of the data set is alleviated. Typical values for N and M are 32 and 16, generating approximately 16 million rays per cube face.

Once a ray is chosen, we cast it into the scene. If it hits an object, and that object lies in the LDI's frustum, we reproject the intersection into the LDI, as depicted in Figure 7, to determine which layered depth pixel should receive the sample. If the new sample is within an epsilon tolerance in depth of an existing depth pixel, the color of the new sample is averaged with the existing depth pixel. Otherwise, the color, normal, and distance to the sample create a new depth pixel that is inserted into the Layered Depth Pixel.
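The stratified scheme can be sketched as a loop over the N⁴ strata of the 4D ray space with M jittered samples per stratum; castRay, inLdiFrustum, and insertSample below are hypothetical stand-ins for the ray tracer and the LDI update just described.

    #include <random>

    void sampleFace(int N, int M, std::mt19937& rng) {
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        for (int px = 0; px < N; ++px)
        for (int py = 0; py < N; ++py)
        for (int du = 0; du < N; ++du)
        for (int dv = 0; dv < N; ++dv)
            for (int m = 0; m < M; ++m) {
                // Jittered sample inside this stratum of the unit 4-cube.
                double sx = (px + uni(rng)) / N, sy = (py + uni(rng)) / N;
                double su = (du + uni(rng)) / N, sv = (dv + uni(rng)) / N;
                // (sx, sy): ray origin on the cube face; (su, sv): unit-square
                // point mapped to a cosine-weighted direction as sketched above.
                // Hit hit = castRay(sx, sy, su, sv);
                // if (hit.valid && inLdiFrustum(hit)) insertSample(hit);
            }
    }

With N = 32 and M = 16 this casts 32⁴ x 16, approximately 16.8 million rays per face, consistent with the figure quoted above.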
4.3 LDIs from Real Images

The dinosaur model in Figure 13 is constructed from 21 photographs of the object undergoing a 360 degree rotation on a computer-controlled calibrated turntable. An adaptation of Seitz and Dyer's voxel coloring algorithm [30] is used to obtain the LDI representation directly from the input images. The regular voxelization of Seitz and Dyer is replaced by a view-centered voxelization similar to the LDI structure. The procedure entails moving outward on rays from the LDI camera center and projecting candidate voxels back into the input images. If all input images agree on a color, this voxel is filled as a depth pixel in the LDI structure. This approach enables straightforward construction of LDIs from images that do not contain depth per pixel.

5 Rendering Layered Depth Images

Our fast warping-based renderer takes as input an LDI along with its associated camera information. Given a new desired camera position, the warper uses an incremental warping algorithm to efficiently create an output image. Pixels from the LDI are splatted
into the output image using the over compositing operation. The size and footprint of the splat is based on an estimated size of the reprojected pixel.

5.1 Space Efficient Representation

When rendering, it is important to maintain the spatial locality of depth pixels to exploit the second level cache in the CPU [13]. To this end, we reorganize the depth pixels into a linear array ordered from bottom to top and left to right in screen space, and back to front along a ray. We also separate out the number of layers in each layered depth pixel from the depth pixels themselves. The layered depth pixel structure does not exist explicitly in this implementation. Instead, a double array of offsets is used to locate each depth pixel. The number of depth pixels in each scanline is accumulated into a vector of offsets to the beginning of each scanline. Within each scanline, for each pixel location, a total count of the depth pixels from the beginning of the scanline to that location is maintained. Thus to find any layered depth pixel, one simply offsets to the beginning of the scanline and then further to the first depth pixel at that location. This supports scanning in right-to-left order as well as the clipping operation discussed later.
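One way to realize this indexing scheme is sketched below (the structure and names are our own illustration):

    #include <cstdint>
    #include <vector>

    struct PackedPixel { uint32_t colorRGBA, zAndSplat; };

    struct CompactLDI {
        int xres = 0, yres = 0;
        std::vector<PackedPixel> pixels;  // all depth pixels, in scan order,
                                          // back-to-front within each location
        std::vector<int> scanlineStart;   // yres entries: first depth pixel of
                                          // each scanline
        std::vector<int> countBefore;     // xres*yres entries: depth pixels
                                          // preceding x within its scanline

        // Index of the first depth pixel stored at (x, y).
        int firstAt(int x, int y) const {
            return scanlineStart[y] + countBefore[y * xres + x];
        }
    };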
5.2 Incremental Warping Computation

Let C1 be the 4x4 matrix for the LDI camera. It is composed of an affine transformation matrix, a projection matrix, and a viewport matrix, C1 = V1 P1 A1. This camera matrix transforms a point from the global coordinate system into the camera's projected image coordinate system. The projected image coordinates (x1, y1), obtained after multiplying the point's global coordinates by C1 and dividing out w1, index a screen pixel address. The z1 coordinate can be used for depth comparisons in a z buffer.

Let C2 be the output camera's matrix. Define the transfer matrix as T1,2 = C2 C1⁻¹. Given the projected image coordinates of some point seen in the LDI camera (e.g., the coordinates of a in Figure 5), this matrix computes the image coordinates as seen in the output camera (e.g., the image coordinates of a2 in camera C2 in Figure 5).

$$\mathbf{T}_{1,2} \begin{bmatrix} x_1 \\ y_1 \\ z_1 \\ 1 \end{bmatrix} = \begin{bmatrix} x_2 w_2 \\ y_2 w_2 \\ z_2 w_2 \\ w_2 \end{bmatrix} = \mathit{result}$$

The coordinates (x2, y2) obtained after dividing by w2, index a pixel address in the output camera's image.

Using the linearity of matrix operations, this matrix multiply can be factored to reuse much of the computation from each iteration through the layers of a layered depth pixel; result can be computed as

$$\mathbf{T}_{1,2} \begin{bmatrix} x_1 \\ y_1 \\ z_1 \\ 1 \end{bmatrix} = \mathbf{T}_{1,2} \begin{bmatrix} x_1 \\ y_1 \\ 0 \\ 1 \end{bmatrix} + z_1\, \mathbf{T}_{1,2} \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} = \mathit{start} + z_1 \cdot \mathit{depth}$$

To compute the warped position of the next layered depth pixel along a scanline, the new start is simply incremented.
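In code, the per-scanline setup might look as follows (a sketch; the Mat4/Vec4 types are illustrative, and the quadrant logic that picks dir is omitted). Stepping one pixel along the scanline just adds the first column of T1,2 (negated when the quadrant is processed right to left), so the inner loop needs no matrix multiply.

    #include <array>

    struct Mat4 { std::array<std::array<double, 4>, 4> m{}; };
    struct Vec4 { std::array<double, 4> v; };

    Vec4 mul(const Mat4& T, const Vec4& p) {
        Vec4 r{};
        for (int i = 0; i < 4; ++i)
            for (int k = 0; k < 4; ++k)
                r.v[i] += T.m[i][k] * p.v[k];
        return r;
    }

    void setupScanline(const Mat4& T12, double x1, double y1, int dir,
                       Vec4& start, Vec4& depth, Vec4& xincr) {
        start = mul(T12, {{x1, y1, 0, 1}});       // T_{1,2} (x1, y1, 0, 1)
        depth = mul(T12, {{0, 0, 1, 0}});         // T_{1,2} (0, 0, 1, 0)
        xincr = mul(T12, {{double(dir), 0, 0, 0}});  // dir is +1 or -1
    }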
The warping algorithm proceeds using McMillan's ordering algorithm [21]. The LDI is broken up into four regions above and below and to the left and right of the epipolar point. For each quadrant, the LDI is traversed in (possibly reverse) scan line order. At the beginning of each scan line, start is computed. The sign of xincr is determined by the direction of processing in this quadrant. Each layered depth pixel in the scan line is then warped to the output image by calling Warp. This procedure visits each of the layers in back to front order and computes result to determine its location in the output image. As in perspective texture mapping, a divide is required per pixel. Finally, the depth pixel's color is splatted at this location in the output image.

The following pseudo code summarizes the warping algorithm applied to each layered depth pixel.

    procedure Warp(ldpix, start, depth, xincr)
        for k = 0 to ldpix.NumLayers-1
            z1 = ldpix.Layers[k].Z
            result = start + z1 * depth
            // cull if the depth pixel goes behind the output camera
            // or if the depth pixel goes out of the output cam's frustum
            if result.w > 0 and IsInViewport(result) then
                result = result / result.w
                // see next section
                sqrtSize = z2 * lookupTable[ldpix.Layers[k].SplatIndex]
                splat(ldpix.Layers[k].ColorRGBA, x2, y2, sqrtSize)
            end if
            // increment for next layered pixel on this scan line
            start = start + xincr
        end for
    end procedure
5.3 Splat Size Computation

[Figure 8: Values for size computation of a projected pixel. Diagram labels: Surface, Normal, C1, C2, d1, d2, θ1, θ2, φ1, φ2, Z2.]

The splat size is computed as

$$\mathit{size} = \frac{d_1^2\, \cos(\theta_2)\, \mathit{res}_2\, \tan(\mathit{fov}_1/2)}{d_2^2\, \cos(\theta_1)\, \mathit{res}_1\, \tan(\mathit{fov}_2/2)}$$

where d1 is the distance from the sampled surface point to the LDI camera, fov1 is the field of view of the LDI camera, res1 = (w1 h1)⁻¹ where w1 and h1 are the width and height of the LDI, and θ1 is the angle between the surface normal and the line of sight to the LDI camera (see Figure 8). The same terms with subscript 2 refer to the output camera.

It will be more efficient to compute an approximation of the square root of size,

$$\sqrt{\mathit{size}} = \frac{d_1 \sqrt{\cos(\theta_2)\,\mathit{res}_2\,\tan(\mathit{fov}_1/2)}}{d_2 \sqrt{\cos(\theta_1)\,\mathit{res}_1\,\tan(\mathit{fov}_2/2)}}
\approx d_1\,\frac{1}{Z_2}\,\frac{\sqrt{\cos(\theta_2)\,\mathit{res}_2\,\tan(\mathit{fov}_1/2)}}{\sqrt{\cos(\theta_1)\,\mathit{res}_1\,\tan(\mathit{fov}_2/2)}}
= d_1\, z_2\,\frac{\sqrt{\cos(\theta_2)\,\mathit{res}_2\,\tan(\mathit{fov}_1/2)}}{\sqrt{\cos(\theta_1)\,\mathit{res}_1\,\tan(\mathit{fov}_2/2)}}$$
We approximate the θs as the angles between the surface normal vector and the z axes of the camera's coordinate systems. We also approximate d2 by Z2, the z coordinate of the sampled point in the output camera's unprojected eye coordinate system. During rendering, we set the projection matrix such that z2 = 1/Z2.

The current implementation supports 4 different splat sizes, so a very crude approximation of the size computation is implemented using a lookup table. For each pixel in the LDI, we store d1 using 5 bits. We use 6 bits to encode the normal, 3 for nx, and 3 for ny. This gives us an eleven-bit lookup table index. Before rendering each new image, we use the new output camera information to precompute values for the 2048 possible lookup table indexes. At each pixel we obtain √size by multiplying the computed z2 by the value found in the lookup table.

$$\sqrt{\mathit{size}} \approx z_2 \cdot \mathit{lookup}[n_x, n_y, d_1]$$

To maintain the accuracy of the approximation for d1, we discretize d1 nonlinearly using a simple exponential function that allocates more bits to the nearby d1 values, and fewer bits to the distant d1 values.

The four splat sizes we currently use have 1 by 1, 3 by 3, 5 by 5, and 7 by 7 pixel footprints. Each pixel in a footprint has an alpha value to approximate a Gaussian splat kernel. However, the alpha values are rounded to 1, 1/2, or 1/4, so the alpha blending can be done with integer shifts and adds.
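A sketch of the per-frame table precomputation follows. decodeD1 and decodeNormalAngles are hypothetical decoders for the quantized depth and normal, the bit layout of the index is our assumption, and the stored value is the √size expression above with the per-pixel z2 factor left out.

    #include <cmath>

    struct Camera;  // output camera parameters (details omitted)

    double decodeD1(int bits);  // hypothetical inverse of the exponential d1 quantizer
    void decodeNormalAngles(int nx, int ny, const Camera& cam,
                            double& theta1, double& theta2);  // hypothetical

    void precomputeSplatTable(const Camera& outputCam,
                              double res1, double res2,
                              double fov1, double fov2,
                              float table[2048]) {
        for (int idx = 0; idx < 2048; ++idx) {
            int d1bits = idx >> 6;    // 5 bits of quantized d1 (layout assumed)
            int nx = (idx >> 3) & 7;  // 3 bits of normal x
            int ny = idx & 7;         // 3 bits of normal y
            double d1 = decodeD1(d1bits);
            double theta1, theta2;
            decodeNormalAngles(nx, ny, outputCam, theta1, theta2);
            table[idx] = float(
                d1 * std::sqrt(std::cos(theta2) * res2 * std::tan(fov1 / 2)) /
                     std::sqrt(std::cos(theta1) * res1 * std::tan(fov2 / 2)));
        }
    }
    // At each pixel during rendering: sqrtSize = z2 * table[splatIndex].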
5.4 Depth Pixel Representation

The size of a cache line on current Intel processors (Pentium Pro and Pentium II) is 32 bytes. To fit four depth pixels into a single cache line we convert the floating point Z value to a 20 bit integer. This is then packed into a single word along with the 11 bit splat table index. These 32 bits along with the R, G, B, and alpha values fill out the 8 bytes. This seemingly small optimization yielded a 25 percent improvement in rendering speed.
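One possible packing is sketched below; the paper fixes the bit counts but not the exact layout, so the placement here (with bit 11 unused) is our assumption.

    #include <cstdint>

    // 8-byte render-time depth pixel: four of these fill a 32-byte cache line.
    struct PackedDepthPixel {
        uint32_t zAndSplat;   // Z in bits 31..12, splat index in bits 10..0
        uint32_t colorRGBA;   // R, G, B, and alpha
    };

    inline uint32_t packZSplat(float z01, uint32_t splatIndex) {
        // z01 is Z scaled into [0, 1); quantize to 20 bits.
        uint32_t z20 = uint32_t(z01 * double(1u << 20)) & 0xFFFFFu;
        return (z20 << 12) | (splatIndex & 0x7FFu);
    }

    inline uint32_t unpackZ(uint32_t word)     { return word >> 12; }
    inline uint32_t unpackSplat(uint32_t word) { return word & 0x7FFu; }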
5.5 Clipping

[Figure 9: LDI with two segments. Diagram labels: Desired View, LDI.]

The LDI of the chestnut tree scene in Figure 11 is a large data set containing over 1.1 million depth pixels. If we naively render this LDI by reprojecting every depth pixel, we would only be able to render at one or two frames per second. When the viewer is close to the tree, there is no need to flow those pixels that will fall outside of the new view. Unseen pixels can be culled by intersecting the view frustum with the frustum of the LDI. This is implemented by intersecting the view frustum with the near and far plane of the LDI frustum, and taking the bounding box of the intersection. This region defines the rays of depth pixels that could be seen in the new view. This computation is conservative, and gives suboptimal results when the viewer is looking at the LDI from the side (see Figure 9). The view frustum intersects almost the entire cross section of the LDI frustum, but only those depth pixels in the desired view need be warped. Our simple clipping test indicates that most of the LDI needs to be warped. To alleviate this, we split the LDI into two segments, a near and a far segment (see Figure 9). These are simply two frusta stacked one on top of the other. The near frustum is kept smaller than the back segment. We clip each segment individually, and render the back segment first and the front segment second. Clipping can speed rendering times by a factor of 2 to 4.
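One way to sketch the conservative clip (our own illustration, under stated assumptions): intersect the 12 edges of the output view frustum with the LDI's near and far planes in LDI camera space, project the intersection points to LDI pixels, and take their bounding box. corners, edges, and project are assumed inputs and helpers.

    #include <algorithm>

    struct Vec3d { double x, y, z; };
    struct Rect { int x0, y0, x1, y1; };

    void project(const Vec3d& p, double& px, double& py);  // hypothetical:
        // maps an LDI-camera-space point to LDI pixel coordinates

    // corners: 8 view-frustum corners already in LDI camera space;
    // edges: the 12 index pairs forming the frustum's edges.
    Rect warpRegion(const Vec3d corners[8], const int edges[12][2],
                    double zNear, double zFar, int xres, int yres) {
        Rect r{xres - 1, yres - 1, 0, 0};  // start empty, grow below
        auto grow = [&](const Vec3d& p) {
            double px, py; project(p, px, py);
            r.x0 = std::min(r.x0, int(px)); r.y0 = std::min(r.y0, int(py));
            r.x1 = std::max(r.x1, int(px)); r.y1 = std::max(r.y1, int(py));
        };
        for (int e = 0; e < 12; ++e) {
            Vec3d a = corners[edges[e][0]], b = corners[edges[e][1]];
            if (a.z == b.z) continue;  // edge parallel to the clip planes
            for (double zp : {zNear, zFar}) {
                double t = (zp - a.z) / (b.z - a.z);  // edge/plane parameter
                if (t >= 0 && t <= 1)
                    grow({a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), zp});
            }
        }
        // Clamp r to the LDI resolution; warp only depth pixels inside r.
        return r;
    }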
6 Results

Sprites with Depth and Layered Depth Images have been implemented in C++. The color figures show two examples of rendering sprites and three examples of rendering LDIs. Figures 3a through 3j show the results of rendering a sprite with depth. The hemisphere in the middle of the sprite pops out of the plane of the sprite, and the illusion of depth is quite good. Figure 4 shows the process of extracting sprites from multiple images using the vision techniques discussed in Section 3. There is a great deal of parallax between the layers of sprites, resulting in a convincing and inexpensive image-based-rendering method.

Figure 10 shows two views of a barnyard scene modeled in Softimage. A set of 20 images was pre-rendered from cameras that encircle the chicken using the Mental Ray renderer. The renderer returns colors, depths, and normals at each pixel. The images were rendered at 320 by 320 pixel resolution, taking approximately one minute each to generate. In the interactive system, the 3 images out of the 17 that have the closest direction to the current camera are chosen. The preprocessor (running in a low-priority thread) uses these images to create an LDI in about 1 second. While the LDIs are allocated with a maximum of 10 layers per pixel, the average depth complexity for these LDIs is only 1.24. Thus the use of three
some of the illustrations in the paper. Finally, we would like to thank Microsoft Research for helping to bring together the authors to work on this project.

References

[19] Nelson Max. Hierarchical Rendering of Trees from Precomputed Multi-Layer Z-Buffers. In Xavier Pueyo and Peter Schröder, editors, Eurographics Rendering Workshop 1996, pages 165-174. Eurographics, Springer Wein, New York City, NY, June 1996.