Toward a Blurry Rasterizer: Jacob Munkberg

This document summarizes a talk on stochastic rasterization techniques for rendering motion blur and depth of field in real-time graphics. It discusses modifying the graphics pipeline to output multiple vertex positions and to sample in space, time, and over the lens. The key challenges addressed are efficiently testing many more samples per pixel and culling invalid samples spatially and temporally. The approaches described include bounding moving primitives, using time intervals, and interleaved rasterization across multiple passes. The goal is to enable realistic camera effects with high quality and low performance cost.


Toward a Blurry Rasterizer

Jacob Munkberg
Intel Corporation

Beyond Programmable Shading Course, ACM SIGGRAPH 2011

Goals
Realistic camera model
Finite aperture: Depth of field
Slow shutter: Motion blur

Ease of use
Set camera parameters and per-vertex motion vectors

High quality & low cost



This Talk
Covers:
Stochastic rasterization for motion blur & depth of field

Not covering:
Post-processing techniques [Potmesil81, Max85, Shinya93, Sousa2008, ...]
Layered depth image approaches [Lee2009, Lee2010, ...]
Blurry ray tracing [Cook84, Walter2006, Wald2007, Hachisuka2008, Overbeck2009, Hou2010, Grünschloß2011, ...]
Analytical visibility [Korein83, Sung2002, Gribel2010, Gribel2011, ...]
Reconstruction & filtering [Egan2009, Soler2009, Parker2010, Lehtinen2011, ...]

For an in-depth discussion of micropolygon rasterization with blur and defocus, please refer to the BPS 2009 course and Kayvon Fatahalian's PhD thesis [Fatahalian2011], as well as Motion Blur Rendering: State of the Art [Navarro et al. 2011].

Motivation
Depth of Field
Direct the viewer's attention
Match footage with CGI

Motion Blur
Enhance effect of motion
Reduce temporal aliasing

Temporal Aliasing



Avoid jerky animations at low frame rates. Temporal aliasing is as important as spatial aliasing. Increasing the frame rate decreases temporal aliasing, but it is still there. Crucial for feature film, which is only now starting to talk about 48 fps.

Motion Blur Rendering


Pixel color is an integral over (x,y,t)

[Figure: a pixel's color is an integral over screen position (x, y) and shutter time t]

Instead of searching for an analytical solution (very hard), we estimate the integral by Monte Carlo techniques.
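As a concrete statement of that estimator (notation mine, not from the slides): for a unit-area pixel with a box filter and a shutter interval [0,1], N uniformly distributed stochastic samples give

```latex
I_{\mathrm{pixel}} \;=\; \int_{0}^{1}\!\!\iint_{\mathrm{pixel}} L(x,y,t)\,\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}t
\;\approx\; \frac{1}{N}\sum_{i=1}^{N} L(x_i, y_i, t_i)
```

Adding depth of field extends the same estimator with lens coordinates (u_i, v_i), as the next slides show.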

Depth of Field Rendering


Pixel color is an integral over (x,y,u,v)
[Figure: thin-lens camera: a pixel on the image plane (x, y) integrates light over lens positions (u, v); objects on the focus plane appear sharp, objects off it appear blurred]

Depth of Field and Motion Blur


Pixel color is an integral over (x,y,u,v,t)
[Figure: the same thin-lens setup with the scene moving between t=0 and t=1 during the shutter interval]

Sampling
Approximating the visibility integral


Approximate Integral
Accumulation buffering
Render scene at many fixed times (and lens positions) [Korein83, Haeberli90]
(uniform sampling)

Stochastic sampling [Cook86]
Sample pixel at non-uniformly spaced locations in (x,y,t,u,v)
Aliasing is replaced by noise
(stochastic sampling)


The visibility integral in (u,v,t) is very hard to solve analytically. Monte Carlo technique: sample the integral at many non-uniformly spaced locations. Stochastic sampling avoids correlation between dimensions and extends to higher dimensions. Averaging N samples per pixel with different (x,y,t) gives spatial & temporal anti-aliasing. Use stratified sampling to reduce noise. Higher correlation between dimensions gives lower noise, but more bias.
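A minimal sketch of one such sampler, assuming N time strata with one jittered (x, y, t) sample per stratum; the names and the simple independent jitter are mine, not from the talk (production patterns are optimized much more carefully, as the following slides note).

```cpp
#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

struct Sample3D { float x, y, t; };   // sub-pixel position + shutter time

// Generate N stratified samples for pixel (px, py): t is jittered within each
// of the N strata of [0,1); (x, y) is jittered uniformly inside the pixel.
std::vector<Sample3D> stratifiedPixelSamples(int px, int py, int N, uint32_t seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);

    std::vector<Sample3D> samples(N);
    for (int i = 0; i < N; ++i) {
        samples[i].x = px + u01(rng);              // jitter in x inside the pixel
        samples[i].y = py + u01(rng);              // jitter in y inside the pixel
        samples[i].t = (i + u01(rng)) / float(N);  // jitter inside time stratum i
    }
    // Shuffle so the time strata are not correlated with sample order.
    std::shuffle(samples.begin(), samples.end(), rng);
    return samples;
}
```

Averaging the visibility and shading results of these samples is exactly the Monte Carlo estimate from the previous slides.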

Interleaved Sampling [Keller2001]


N fixed sample times
Position the triangle at each time and sample sparsely in (x,y)
Improvement over accumulation buffering, as the (x,y) positions vary from tile to tile
5D: use (ui,vi,ti) tuples [Fatahalian2009]

[Figure: a triangle rasterized at the fixed times t = 0, 0.3, 0.6, 1.0]

The fixed set of times is distributed over a screen-space tile. Fatahalian et al. [2009] use this for micropolygon motion blur and depth of field.

Sampling for Motion Blur (3D)


A set of 64 fixed times is sufficient to avoid banding artifacts
Many samples per pixel are needed to reduce noise

[Figure: image crops rendered at 4, 8, 16, 32, and 256 samples per pixel]

High-quality sampling patterns are available, and on-the-fly generation is possible. Use patterns that maximize the minimum distance in 3D and have good 2D projections (onto screen-space x,y). Reconstruction filters [Lehtinen2011] can help tremendously here.

Sampling for DOF (4D)


Need many unique lens positions

[Figure: comparison of InterleaveUV with 64 and 256 lens positions against 64 random samples per pixel]

Furthermore, out-of-focus regions and high frequencies require more visibility samples per pixel to reduce noise levels.

Sampling for MB+DOF (5D)

Need a large number of unique (ui,vi,ti) tuples
Harder to find good sample distributions in 5D

[Figure: InterleaveUVT versus Sobol sampling at 4 and 16 spp. The InterleaveUVT images use 64 unique (u,v,t) positions and suffer from the limited number of unique triplets, especially at low sample counts. Images courtesy of Laine et al. [2011], taken from Assassin's Creed by Ubisoft.]

Visibility
The rasterizer stage


Recap: Standard Rasterization



Visibility determination in a graphics pipeline - convert geometry to pixels

The Graphics Pipeline


Input Assembler → Vertex Shader → Hull Shader → Tessellator → Domain Shader → Rasterizer → Pixel Shader → Output Merger
(all stages connect to the Memory System: buffers, textures)

Rasterization of Static Triangle


Determine which pixels a triangle covers
1. Project the triangle on screen
2. Traverse the covered area
3. Test samples

Traverse in an order that enables coherent texture and memory accesses. Projection implies clipping; clipping also enables fixed-point rasterization.

Test Samples



Test samples = evaluate edge equations
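To make "evaluate edge equations" concrete, here is a common edge-function formulation (a generic sketch, not code from the talk): with the convention that the interior lies to the left of each directed edge of a counter-clockwise triangle, a sample is covered when all three edge functions are non-negative.

```cpp
struct Vec2 { float x, y; };

// Edge function of edge (a, b) evaluated at point c; equals twice the signed
// area of triangle (a, b, c).
static float edgeFunction(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if sample s is covered by the counter-clockwise triangle (v0, v1, v2).
// A real rasterizer also needs a fill rule (e.g., top-left) for shared edges.
bool sampleInsideTriangle(const Vec2& v0, const Vec2& v1, const Vec2& v2, const Vec2& s)
{
    return edgeFunction(v0, v1, s) >= 0.0f &&
           edgeFunction(v1, v2, s) >= 0.0f &&
           edgeFunction(v2, v0, s) >= 0.0f;
}
```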

Tiled Traversal
Test tile of pixels for overlap
Reject samples in tiles completely outside the triangle
Enables coarse occlusion culling, improves coherence

Important optimization for larger triangles. Basis for a hierarchical rasterizer. Improves texture and memory coherence
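A sketch of the corresponding conservative tile test (my simplification of the standard trick): for each edge, evaluate the edge equation at the tile corner that maximizes it; if even that corner is outside, no sample in the tile can be covered and the tile is rejected.

```cpp
struct Vec2 { float x, y; };
struct Edge { float a, b, c; };   // edge function e(x,y) = a*x + b*y + c

// Build the edge function through p -> q with the inside on the left (CCW triangle).
static Edge makeEdge(const Vec2& p, const Vec2& q)
{
    Edge e;
    e.a = -(q.y - p.y);
    e.b =  (q.x - p.x);
    e.c = -(e.a * p.x + e.b * p.y);
    return e;
}

// Conservative test: can any point of the tile [x0,x1]x[y0,y1] be inside edge e?
// Evaluate e at the corner selected by the signs of (a, b): the "most inside" corner.
static bool tileOverlapsEdge(const Edge& e, float x0, float y0, float x1, float y1)
{
    float x = (e.a >= 0.0f) ? x1 : x0;
    float y = (e.b >= 0.0f) ? y1 : y0;
    return e.a * x + e.b * y + e.c >= 0.0f;
}

// Trivially reject a tile against a CCW triangle: if any edge rejects it, skip it.
bool tileMayOverlapTriangle(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                            float x0, float y0, float x1, float y1)
{
    Edge e0 = makeEdge(v0, v1), e1 = makeEdge(v1, v2), e2 = makeEdge(v2, v0);
    return tileOverlapsEdge(e0, x0, y0, x1, y1) &&
           tileOverlapsEdge(e1, x0, y0, x1, y1) &&
           tileOverlapsEdge(e2, x0, y0, x1, y1);
}
```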

Blurry Rasterization


Blurry Rasterization
Moving triangles
[Figure: a triangle with vertices q0, q1, q2 at t=0 moving linearly to r0, r1, r2 at t=1, shown at an intermediate time t=0.37]

Triangles out of focus


Assume linear vertex motion. Motion blur: each vertex moves from q_i at t=0 to r_i at t=1 during the shutter interval. For depth of field, assume a thin-lens model. DOF: a triangle out of focus gets fuzzy on screen; each vertex p_i, as seen from the center of the lens, is sheared in clip-space x and y.
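A sketch of how a rasterizer could evaluate a vertex's clip-space position for a given sample (t, u, v), assuming linear clip-space motion between the two positions and a circle of confusion that is linear in clip-space w; the cocA/cocB coefficients (derived from aperture and focus distance) and all names are mine, not from the talk.

```cpp
struct Vec4 { float x, y, z, w; };

// Clip-space position of a vertex at shutter time t in [0,1] and lens position
// (u, v) in the unit disk. q = position at t=0, r = position at t=1.
// Thin-lens assumption: coc(w) = cocA * w + cocB (signed, in clip-space units),
// so a vertex exactly in focus gets coc == 0 and no shear.
Vec4 vertexAt(const Vec4& q, const Vec4& r, float t, float u, float v,
              float cocA, float cocB)
{
    Vec4 p;
    p.x = q.x + t * (r.x - q.x);
    p.y = q.y + t * (r.y - q.y);
    p.z = q.z + t * (r.z - q.z);
    p.w = q.w + t * (r.w - q.w);

    // Depth of field: shear x and y in clip space by the lens offset scaled by
    // the circle of confusion at this depth.
    float coc = cocA * p.w + cocB;
    p.x += u * coc;
    p.y += v * coc;
    return p;
}
```

Testing a sample (x, y, u, v, t) then amounts to evaluating the edge equations of the triangle formed by vertexAt(...) for the three vertices at that sample's (t, u, v).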

The Modified Graphics Pipeline


Input Assembler → Vertex Shader (output two vertex positions) → Hull Shader → Tessellator → Domain Shader → Rasterizer (sample in (x,y,u,v,t)) → Pixel Shader → Output Merger
(all stages connect to the Memory System: buffers, textures)


Modifications needed: output two vertex positions from the vertex shader (start and end of motion) and modify the rasterizer to sample in time and over the lens. Pass along render state so that the rasterizer knows about the lens size and focal plane. One option is to do stochastic sampling in the pixel shader [McGuire2010], but it has some limitations and performance issues. Here, we focus on modifying the HW rasterizer.

Clipping Moving Triangles


Clipping moving primitives is very difficult
Vertices intersect the frustum at different times
[Figure: a moving triangle crossing the near plane; its vertices intersect the frustum at different times]

Alternative 1: use a fixed set of sample times and clip the triangle at each time. Alternative 2: homogeneous rasterization, since linear interpolation breaks down under clipping.

Screen-Space Bounds
Bound full motion and all lens positions
Handle moving triangles intersecting z=0 with care

[Figure: screen-space bounding box enclosing the triangle's positions from t=0 to t=1, expanded into a DOF bounding box]
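A minimal bounding sketch under the slide's assumptions (linear clip-space motion, circle of confusion linear in w, lens coordinates bounded by [-1,1]); the cocA/cocB parameters are the same assumed ones as before, and the early return for w <= 0 simply defers the "handle z=0 with care" case to a coarser path.

```cpp
#include <algorithm>
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Box2 { float xmin, ymin, xmax, ymax; };

// Conservative NDC bounding box of a moving, defocused triangle.
// verts[i][0] is vertex i at t=0, verts[i][1] at t=1. Returns false if any
// vertex has w <= 0: that triangle needs the careful near-plane path.
bool boundMovingTriangle(const Vec4 verts[3][2], float cocA, float cocB, Box2& box)
{
    box = { 1e30f, 1e30f, -1e30f, -1e30f };
    for (int i = 0; i < 3; ++i) {
        for (int k = 0; k < 2; ++k) {
            const Vec4& p = verts[i][k];
            if (p.w <= 0.0f) return false;
            // With w > 0 over the whole interval, x/w and y/w are monotonic in t,
            // so the t=0 and t=1 endpoints bound the swept screen position.
            float coc = std::abs(cocA * p.w + cocB);   // max lens-induced offset
            box.xmin = std::min(box.xmin, (p.x - coc) / p.w);
            box.xmax = std::max(box.xmax, (p.x + coc) / p.w);
            box.ymin = std::min(box.ymin, (p.y - coc) / p.w);
            box.ymax = std::max(box.ymax, (p.y + coc) / p.w);
        }
    }
    return true;
}
```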

Spatio-Temporal Visibility
A moving triangle may cover a large region in (x,y,t)
More samples per pixel are needed for low noise levels
We need to quickly cull samples in both space and time
The key point is that many more samples need to be tested, and that the complexity is tied to the amount of blur applied (motion and/or DOF). The dimension of the visibility query has increased from two to three (or five) dimensions.

Screen Space Convex Hull


Reduce spatial bounds [McGuire2010]
A tile is only visited once, enabling multisampling & coarse Z culling
No temporal culling

Revert to coarser bounds if moving triangle intersects z=0. The screen space convex hull algorithm only works on projected vertices with z>0.

Interval
Use N time intervals [Cook90, Fatahalian2009]
Bound the triangle's movement in each interval and test all samples in the (x,y,t) box

Requires multiple triangle setups and boundings. One tile is visited multiple times.

Interleaved Rasterization
Use N fixed sample times [Keller2001, Fatahalian2009]
Position the triangle at each time
Rasterize coarsely in (x,y)
Restricted sampling patterns


Similar to accumulation buffering, but only 1/N of the samples are tested in each pass. All temporal samples are found within a screen-space tile: interleaved tile size = N / (samples per pixel). One tile is visited many times.
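A sketch of the tile-size relation in the note above (my helper names): with spp samples per pixel and N fixed sample times, a tile of N/spp pixels contains every time exactly once, and each sample's time index can be derived from its position.

```cpp
#include <cassert>
#include <cmath>

// With N fixed sample times and spp samples per pixel, a screen-space tile of
// N / spp pixels contains every sample time exactly once. Assuming a square
// tile and a simple raster-order assignment (a real implementation would
// scramble the order per tile), this returns which of the N fixed times is
// assigned to sample s of pixel (px, py).
int interleavedTimeIndex(int px, int py, int s, int N, int spp)
{
    assert(N % spp == 0);
    int tilePixels = N / spp;                               // pixels per tile
    int tileW = (int)std::lround(std::sqrt((double)tilePixels));
    assert(tileW * tileW == tilePixels);                    // square tile assumed
    int lx = px % tileW;                                    // position inside tile
    int ly = py % tileW;
    return (ly * tileW + lx) * spp + s;                     // index into the fixed times
}

// Example: N = 64 fixed times and spp = 4  ->  a 4x4 pixel tile covers all 64 times.
```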

Tile-Based Traversal
The tile test reduces the number of samples to test
Trivially rejects the tile or returns temporal bounds for the overlap [Laine2011, Munkberg2011b]
Preserves GPU traversal order

A tile is only visited once, enabling multisampling & coarse Z culling. Extends the standard GPU pipeline.

5D Rasterization
Interval [Cook90, Fatahalian2009]
Many strata in (u,v,t); find (x,y) bounds for each stratum

Interleaved sampling [Keller2001, Fatahalian2009]
Generate a fixed set of (u,v,t) tuples and rasterize coarsely in (x,y)

Tile-based traversal with (u,v,t) bounds [Laine2011]
The tile test returns bounds for u, v, and t individually
Interleaving needs a large set of tuples. Laine: tile bounds for t are computed using the entire lens bounds, and tile bounds for (u,v) are computed using the full motion trail. Lower efficiency in 5D than in the 3D and 4D tests.

Tile-Overlap Tests
Find the time interval during which the moving triangle overlaps the screen-space tile [Laine2011, Munkberg2011b]

[Figure: a triangle sweeping from t=0 to t=1 across a row of tiles; the per-tile overlap intervals are [0,1/4], [0,1/2], [1/4,3/4], [1/2,1], and [3/4,1]]

Examples: Dual-space bounds [Laine], Tile frustum vs moving AABB & Tile vs moving triangle edges [Munkberg].
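As a simplified illustration of such a tile test (a linearly moving AABB against a screen-space tile, one of the bounding choices mentioned above; the actual tests in [Laine2011] and [Munkberg2011b] are tighter), each axis gives a linear inequality in t, and intersecting the per-axis solutions yields the tile's active time range:

```cpp
#include <algorithm>

struct Interval { float lo, hi; bool empty() const { return lo > hi; } };

// Restrict interval I to { t : p + q*t >= 0 }.
static void clipLinear(Interval& I, float p, float q)
{
    if (q > 0.0f)      I.lo = std::max(I.lo, -p / q);
    else if (q < 0.0f) I.hi = std::min(I.hi, -p / q);
    else if (p < 0.0f) I = { 1.0f, 0.0f };            // never satisfied -> empty
}

// Time range in [0,1] where a linearly moving 2D AABB (min0/max0 at t=0,
// min1/max1 at t=1, per axis) overlaps the tile [tileMin, tileMax] per axis.
Interval tileOverlapInterval(const float min0[2], const float max0[2],
                             const float min1[2], const float max1[2],
                             const float tileMin[2], const float tileMax[2])
{
    Interval I = { 0.0f, 1.0f };
    for (int a = 0; a < 2 && !I.empty(); ++a) {
        // Overlap on axis a requires max(t) >= tileMin and min(t) <= tileMax,
        // where min(t) and max(t) interpolate linearly between t=0 and t=1.
        clipLinear(I, max0[a] - tileMin[a],  (max1[a] - max0[a]));  // max(t) - tileMin >= 0
        clipLinear(I, tileMax[a] - min0[a], -(min1[a] - min0[a]));  // tileMax - min(t) >= 0
    }
    return I;   // empty() -> trivially reject; otherwise only test samples with t in [lo, hi]
}
```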

Tile-Overlap Test for DOF


[Figure: thin-lens diagram with objects in focus on the focus plane and objects out of focus in front of and behind it]

Idea: only test the lens region that sees the object through a given screen-space tile

[Figure: the subset of the lens that sees the out-of-focus object through the tile]

Lens Bounds Per Tile


Separating plane tests determine the active lens region [Akenine-Möller2011, Laine2011]

[Figure: separating lines between the tile and the triangle cut away the parts of the lens that cannot see the triangle through the tile]

Shading


Shading Static Primitives


Shading after visibility
Shade per 2x2 pixels in screen space

Shading before visibility [Cook87]
Shade in object space per vertex
Decoupled shading and visibility
GPU: derivatives by finite differences. Supports multisample anti-aliasing (MSAA): shade once per pixel but take many visibility samples. Very efficient for large primitives. Reyes: shading before visibility; shade a grid of vertices in object space. Efficient SIMD shading; derivatives by finite differences. Requires small primitives. Small to large triangles: hierarchical rasterization is needed, but an explicit cache is needed for decoupled sampling and shading. Very small triangles: hierarchical rasterization is only efficient for large blurs; shading at vertex frequency is sufficient, and decoupled sampling and shading are supported by design.

Shading Moving Primitives


Supersampling
Shade each visibility sample
Prohibitively expensive

Extend MSAA [McGuire2010]
Shade once per covered pixel at a certain time
Texture filters take motion into account [Loviscach2005]

[Figure: motion blur example]
Approaches: Supersampling: shade every visibility sample; no restrictions on traversal order; captures temporally varying shading. MSAA for motion blur and DOF: extend standard MSAA to motion blur & depth of field; may shade arbitrarily far outside the triangle; additional shading samples are needed for the temporal derivative; requires a tile-based traversal order. Decoupled shading: a mapping function between visibility samples and shading samples maps multiple visibility samples to one shading point. Per-vertex shading (requires pixel-sized primitives).

MSAA Failure Case


Reverts to supersampling for large motion
Each sample hits a new triangle
[Figure: a small triangle moving across the screen; the four multisamples, assigned times t=0, 1/4, 1/2, and 3/4, each hit the triangle at a different position]

A different sample time is assigned to each of the four multi-samples

Decoupled Sampling [Ragan-Kelley2011, Burns2010]


Many-to-one mapping function between visibility and shading samples
Low shading rate, even for large motion and defocus
Needs a cache for shaded values
Remapping logic per sample
Assume shading is constant over time. In general: design a (many-to-one) mapping function between visibility samples and shading samples. Remap each sample to a fixed time and the center of the lens, and use the remapped location as a cache key. On a cache hit, fetch the color; otherwise shade and store the color.

Decoupled Sampling Example


[Figure: visibility samples in screen space are remapped, via barycentric space, to the triangle at t=0 and snapped to the closest shading sample in shading space]


Each visibility sample has a position in (x,y,t). Some of them hit the triangle. Compute the barycentric coordinate of the hit point. Remap to the triangle at t=0 and snap to the closest shading point in some shading space. Here, the shading points are in the center of the pixel.
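A sketch of that cache in code, with hypothetical types and a one-shading-point-per-pixel grid (the actual pipelines in [Ragan-Kelley2011, Burns2010] are considerably more involved): the visibility sample's barycentrics are re-evaluated on the triangle at t=0 and the lens center, snapped to the shading grid, and the snapped position is used as the cache key.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };
struct Bary  { float b0, b1, b2; };       // barycentrics of the visibility hit

static const float kShadingRate = 1.0f;   // one shading point per pixel

struct ShadeKey {
    int     primId;
    int32_t sx, sy;                       // snapped shading-grid cell
    bool operator==(const ShadeKey& o) const {
        return primId == o.primId && sx == o.sx && sy == o.sy;
    }
};
struct ShadeKeyHash {
    std::size_t operator()(const ShadeKey& k) const {
        return std::hash<int64_t>()(((int64_t)k.primId << 40) ^
                                    ((int64_t)(uint32_t)k.sx << 20) ^ (uint32_t)k.sy);
    }
};
using ShadeCache = std::unordered_map<ShadeKey, Color, ShadeKeyHash>;

// Placeholder for running the material/pixel shader at a shading point.
static Color shadeAt(int /*primId*/, const Vec2& /*pos*/) { return { 1.0f, 1.0f, 1.0f }; }

// Resolve the color of one visibility sample that hit primitive primId with
// barycentrics b; v*_t0 are the triangle's screen-space vertices at t=0, lens center.
Color decoupledShade(ShadeCache& cache, int primId, const Bary& b,
                     const Vec2& v0_t0, const Vec2& v1_t0, const Vec2& v2_t0)
{
    // Remap: evaluate the same barycentrics on the triangle at t=0, lens center.
    Vec2 p = { b.b0 * v0_t0.x + b.b1 * v1_t0.x + b.b2 * v2_t0.x,
               b.b0 * v0_t0.y + b.b1 * v1_t0.y + b.b2 * v2_t0.y };
    // Snap to the shading grid (pixel centers here) to form the cache key.
    ShadeKey key = { primId, (int32_t)std::floor(p.x * kShadingRate),
                             (int32_t)std::floor(p.y * kShadingRate) };
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;   // cache hit: reuse the shaded color
    Color c = shadeAt(primId, p);               // cache miss: shade once and store
    cache.emplace(key, c);
    return c;
}
```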

Reconstruction
Reduce noise in stochastic rendering
May drastically reduce the required number of samples per pixel
Store additional information with the samples [Lehtinen2011]

[Figure: input image (1 spp), reconstructed result, and reference (256 spp); images courtesy of Lehtinen et al. [2011]]


An important emerging research field for stochastic rendering. Can be seen as an alternative to shading caches: one shading value is reused for many (reconstructed) visibility samples. Lehtinen et al. store a motion vector and the sample depth, which enables on-the-fly reprojection of stochastic samples during reconstruction. Egan & Soler: design a sheared filter by analyzing the frequency domain; adaptive sampling based on predicted bandwidth.

Culling
View frustum culled
Backface culled
Occlusion culled

View Frustum Culling


Solve for the temporal interval when the triangle is inside the view frustum [Laine2011, Munkberg2011b]

[Figure: a moving triangle crossing the near plane; it is inside the view frustum only for part of the shutter interval]
Test the entire moving triangle. This gives smaller screen-space bounding boxes (and better dual-space bounds).

Motion Blur Backface Culling


The backface test is a cubic polynomial in t [Munkberg2011a]
It is not conservative to test facing only at t=0 and t=1

[Figure: a moving triangle shown at t=0, 0.5, and 1; it changes facing during the shutter interval]

The backface test for a triangle with linear vertex motion is a cubic polynomial in t. A cubic polynomial has up to three zeros, so the sign of the signed area can change up to three times.
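A sketch of how such a test could be evaluated (this follows the general idea; the exact formulation and optimizations in [Munkberg2011a] differ): with linear motion p_i(t) = q_i + t*d_i in 2D homogeneous coordinates (x, y, w), the facing determinant det[p0(t), p1(t), p2(t)] expands to a cubic in t, and the triangle may be culled only if that cubic keeps the back-facing sign over all of [0,1].

```cpp
#include <cmath>
#include <initializer_list>

struct Vec3 { float x, y, w; };              // clip-space (x, y, w) of a vertex

static float det3(const Vec3& a, const Vec3& b, const Vec3& c)
{
    return a.x * (b.y * c.w - b.w * c.y)
         - a.y * (b.x * c.w - b.w * c.x)
         + a.w * (b.x * c.y - b.y * c.x);
}

// Coefficients of f(t) = det[p0(t), p1(t), p2(t)], with p_i(t) = q_i + t*d_i.
// By multilinearity, the t^k coefficient is the sum of determinants with
// exactly k rows replaced by the motion deltas d_i.
void facingCubic(const Vec3 q[3], const Vec3 d[3], float c[4])
{
    c[0] = det3(q[0], q[1], q[2]);
    c[1] = det3(d[0], q[1], q[2]) + det3(q[0], d[1], q[2]) + det3(q[0], q[1], d[2]);
    c[2] = det3(d[0], d[1], q[2]) + det3(d[0], q[1], d[2]) + det3(q[0], d[1], d[2]);
    c[3] = det3(d[0], d[1], d[2]);
}

static float evalCubic(const float c[4], float t)
{
    return ((c[3] * t + c[2]) * t + c[1]) * t + c[0];
}

// Conservative cull: true only if f(t) < 0 for all t in [0,1] (back-facing with
// this winding/sign convention). Checks the endpoints and the interior extrema
// of the cubic (roots of its derivative), so a sign change cannot be missed.
bool backfacingForAllT(const float c[4])
{
    if (evalCubic(c, 0.0f) >= 0.0f || evalCubic(c, 1.0f) >= 0.0f) return false;
    // f'(t) = 3*c3*t^2 + 2*c2*t + c1
    float A = 3.0f * c[3], B = 2.0f * c[2], C = c[1];
    if (std::fabs(A) < 1e-20f) {
        if (std::fabs(B) > 1e-20f) {
            float t = -C / B;
            if (t > 0.0f && t < 1.0f && evalCubic(c, t) >= 0.0f) return false;
        }
        return true;
    }
    float disc = B * B - 4.0f * A * C;
    if (disc < 0.0f) return true;            // no interior extrema
    float s = std::sqrt(disc);
    for (float t : { (-B - s) / (2.0f * A), (-B + s) / (2.0f * A) })
        if (t > 0.0f && t < 1.0f && evalCubic(c, t) >= 0.0f) return false;
    return true;
}
```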

DOF Backface Culling

[Figure: comparison of backface culling at the center of the lens with correct backface culling over the whole lens]

Occlusion Culling for Motion Blur


Standard z-culling is less efficient
Spatio-temporal occlusion culling [Akenine-Möller2007, Boulos2010]

[Figure: occlusion in (x,t); left: a single solid occluder valid for t=[0,1]; right: a solid occluder per sample time ti. Illustration adapted from Boulos et al. [2010]]

Left: the solid occluder is minimal. Right: in the other extreme, there is a separate occlusion hierarchy per sample time. A middle ground is probably the most practical approach. In a GPU-like pipeline, we want coarse z-culling at the tile level. A full spatio-temporal z-hierarchy improves culling, but is likely too expensive in terms of memory and bandwidth usage. Conclusion: tile-based traversal is preferable, and efficient 5D occlusion culling is an open research field.
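A sketch of the "single solid occluder valid for t=[0,1]" extreme from the figure (hypothetical data layout, larger depth = farther from the camera): each tile keeps one conservative zmax that holds for every sample time, so a moving triangle can only be culled if it stays behind that occluder over the whole shutter.

```cpp
#include <algorithm>

// The simplest spatio-temporal variant: one conservative zmax per tile that is
// valid for every sample time in [0,1].
struct TileZ { float zmaxAllT; };

// triZminOverShutter: a conservative lower bound on the moving triangle's depth
// over the whole tile, shutter and lens (e.g., the minimum vertex depth at t=0
// and t=1 for linear motion).
bool occlusionCullTile(const TileZ& tile, float triZminOverShutter)
{
    // Cull only if the triangle is behind the occluder at every sample time.
    return triZminOverShutter >= tile.zmaxAllT;
}

// A tile's zmax may only be lowered by geometry that fully covers the tile for
// all t in [0,1]; per-time zmax arrays (the other extreme in the figure) cull
// more but cost N times the storage and bandwidth.
void updateTileZ(TileZ& tile, float coveringGeomZmaxAllT)
{
    tile.zmaxAllT = std::min(tile.zmaxAllT, coveringGeomZmaxAllT);
}
```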

Conclusions
Stochastic rasterization does not come for free
Many more sample tests and shading evaluations
Higher color, depth & texture bandwidth
Less efficient culling and compression

Efficient shading is a requirement
Very easy for the end user if implemented in HW

Q&A
Thank You:
The Advanced Rendering Technology team at Intel
The Graphics Group at Lund University
Mike Doggett, Jonathan Ragan-Kelley, Samuli Laine, and Timo Aila for fruitful discussions

References
[Akenine-Möller2007] Tomas Akenine-Möller, Jacob Munkberg, and Jon Hasselgren, "Stochastic Rasterization using Time-Continuous Triangles", Graphics Hardware, August 2007.
[Akenine-Möller2011] Tomas Akenine-Möller, Robert Toth, Jacob Munkberg, and Jon Hasselgren, "Efficient Depth of Field Rasterization using a Tile Test based on Half-Space Culling", technical report, June 2011.
[Boulos2010] Solomon Boulos, Edward Luong, Kayvon Fatahalian, Henry Moreton, and Pat Hanrahan, "Space-Time Hierarchical Occlusion Culling for Micropolygon Rendering with Motion Blur", High Performance Graphics 2010.
[Burns2010] Christopher A. Burns, Kayvon Fatahalian, and William R. Mark, "A Lazy Object-Space Shading Architecture With Decoupled Sampling", High Performance Graphics, pp. 19-28, 2010.
[Cook84] Robert L. Cook, Thomas Porter, and Loren Carpenter, "Distributed Ray Tracing", SIGGRAPH 1984.
[Cook86] Robert L. Cook, "Stochastic Sampling in Computer Graphics", ACM Transactions on Graphics, vol. 5, no. 1, January 1986.
[Cook87] Robert L. Cook, Loren Carpenter, and Edwin Catmull, "The Reyes Image Rendering Architecture", SIGGRAPH 1987.
[Cook90] Robert L. Cook, Thomas Porter, and Loren Carpenter, "Pseudo-random point sampling techniques in computer graphics", United States Patent 4,897,806, 1990.
[Egan2009] Kevin Egan, Yu-Ting Tseng, Nicolas Holzschuch, Frédo Durand, and Ravi Ramamoorthi, "Frequency analysis and sheared reconstruction for rendering motion blur", ACM Transactions on Graphics, vol. 28, no. 3, August 2009.
[Fatahalian2009] Kayvon Fatahalian, Edward Luong, Solomon Boulos, Kurt Akeley, William R. Mark, and Pat Hanrahan, "Data-Parallel Rasterization of Micropolygons with Defocus and Motion Blur", High Performance Graphics 2009.
[Fatahalian2011] Kayvon Fatahalian, "Evolving the Real-Time Graphics Pipeline for Micropolygon Rendering", Ph.D. Dissertation, Stanford University, 2011.
[Grünschloß2011] Leonhard Grünschloß, Martin Stich, Sehera Nawaz, and Alexander Keller, "MSBVH: An Efficient Acceleration Data Structure for Ray Traced Motion Blur", High Performance Graphics 2011.


References cont.
[Gribel2010] Carl Johan Gribel, Michael Doggett, and Tomas Akenine-Möller, "Analytical Motion Blur Rasterization with Compression", High Performance Graphics, pp. 163-172, June 2010.
[Gribel2011] Carl Johan Gribel, Rasmus Barringer, and Tomas Akenine-Möller, "High-Quality Spatio-Temporal Rendering using Semi-Analytical Visibility", SIGGRAPH 2011.
[Hachisuka2008] Toshiya Hachisuka, Wojciech Jarosz, Richard Peter Weistroffer, Kevin Dale, Greg Humphreys, Matthias Zwicker, and Henrik Wann Jensen, "Multidimensional adaptive sampling and reconstruction for ray tracing", SIGGRAPH 2008.
[Hou2010] Qiming Hou, Hao Qin, Wenyao Li, Baining Guo, and Kun Zhou, "Micropolygon ray tracing with defocus and motion blur", SIGGRAPH 2010.
[Haeberli90] Paul Haeberli and Kurt Akeley, "The accumulation buffer: hardware support for high-quality rendering", SIGGRAPH 1990, pp. 309-318.
[Korein83] Jonathan Korein and Norman Badler, "Temporal anti-aliasing in computer generated animation", Computer Graphics, vol. 17, no. 3, pp. 377-388, 1983.
[Keller2001] Alexander Keller and Wolfgang Heidrich, "Interleaved Sampling", Eurographics Workshop on Rendering Techniques, pp. 269-276, 2001.
[Laine2011] Samuli Laine, Timo Aila, Tero Karras, and Jaakko Lehtinen, "Clipless Dual-Space Bounds for Faster Stochastic Rasterization", SIGGRAPH 2011.
[Lee2009] Sungkil Lee, Elmar Eisemann, and Hans-Peter Seidel, "Depth-of-field rendering with multiview synthesis", SIGGRAPH Asia 2009.
[Lee2010] Sungkil Lee, Elmar Eisemann, and Hans-Peter Seidel, "Real-time lens blur effects and focus control", SIGGRAPH 2010.
[Lehtinen2011] Jaakko Lehtinen, Timo Aila, Jiawen Chen, Samuli Laine, and Frédo Durand, "Temporal Light Field Reconstruction for Rendering Distribution Effects", SIGGRAPH 2011.
[Loviscach2005] Jörn Loviscach, "Motion Blur for Textures by Means of Anisotropic Filtering", Eurographics Symposium on Rendering, pp. 105-110, 2005.
[Max85] N. L. Max and D. M. Lerner, "A two-and-a-half-D motion-blur algorithm", SIGGRAPH 1985.
[McGuire2010] Morgan McGuire, Eric Enderton, Peter Shirley, and David Luebke, "Real-Time Stochastic Rasterization on Conventional GPU Architectures", High Performance Graphics, June 2010.


References cont.
[Munkberg2011a] Jacob Munkberg and Tomas Akenine-Möller, "Backface Culling for Motion Blur and Depth of Field", Journal of Graphics, GPU, and Game Tools, 2011.
[Munkberg2011b] Jacob Munkberg, Petrik Clarberg, Jon Hasselgren, Robert Toth, Masamichi Sugihara, and Tomas Akenine-Möller, "Hierarchical Stochastic Motion Blur Rasterization", High Performance Graphics 2011.
[Navarro2011] Fernando Navarro, Francisco J. Serón, and Diego Gutierrez, "Motion Blur Rendering: State of the Art", Computer Graphics Forum, vol. 30, no. 1, pp. 3-26, 2011.
[Overbeck2009] Ryan S. Overbeck, Craig Donner, and Ravi Ramamoorthi, "Adaptive Wavelet Rendering", ACM Transactions on Graphics, vol. 28, no. 5, December 2009.
[Potmesil81] M. Potmesil and I. Chakravarty, "A lens and aperture camera model for synthetic image generation", SIGGRAPH 1981, pp. 389-399.
[Ragan-Kelley2011] Jonathan Ragan-Kelley, Jaakko Lehtinen, Jiawen Chen, Michael Doggett, and Frédo Durand, "Decoupled Sampling for Graphics Pipelines", SIGGRAPH 2011.
[Shinya93] M. Shinya, "Spatial anti-aliasing for animation sequences with spatio-temporal filtering", SIGGRAPH 1993, pp. 289-296.
[Shirley2011] Peter Shirley, Timo Aila, Jonathan Cohen, Eric Enderton, Samuli Laine, David Luebke, and Morgan McGuire, "A Local Image Reconstruction Algorithm for Stochastic Rendering", ACM Symposium on Interactive 3D Graphics and Games (I3D), February 2011.
[Soler2009] Cyril Soler, Kartic Subr, Frédo Durand, Nicolas Holzschuch, and François Sillion, "Fourier depth of field", ACM Transactions on Graphics, vol. 28, no. 2, 2009.
[Sousa2008] T. Sousa, "Crysis next-gen effects", Game Developers Conference 2008.
[Sung2002] K. Sung, A. Pearce, and C. Wang, "Spatial-Temporal Antialiasing", IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 2, pp. 144-153, April 2002.
[Wald2007] I. Wald, W. R. Mark, J. Günther, S. Boulos, T. Ize, W. Hunt, S. G. Parker, and P. Shirley, "State of the art in ray tracing animated scenes", Eurographics 2007 State of the Art Reports.
[Walter2006] Bruce Walter, Adam Arbree, Kavita Bala, and Donald P. Greenberg, "Multidimensional Lightcuts", SIGGRAPH 2006.
