07 – Path Tracing
Christoph Garth
Scientific Visualization Lab
Motivation
Reminder: Rendering Equation
L(x, ω_r) = L_e(x, ω_r) + ∫_Ω f_r(x, ω_i, ω_r) · L_i(x, ω_i) · cos(θ_i) dω_i

Whitted ray tracing has avoided or very roughly approximated the actual integration by simplifying assumptions.
This chapter: direct approximation of the integral using Monte Carlo integration.
Example: for the density p(x) = 3/2 · x² on [−1, 1],

∫_{−1}^{1} 3/2 · x² dx = [1/2 · x³]_{−1}^{1} = 1   and   E[X] = ∫_{−1}^{1} x · 3/2 · x² dx = [3/8 · x⁴]_{−1}^{1} = 0
Interpretation: If enough samples are taken, the arithmetic mean of the random
variables converges to the expected value.
The variance provides the average squared distance to the expected value:

Var(X) = 1/n · Σ_{i=1}^{n} (x_i − E[X])²
It describes how strongly the individual samples vary around the expected value.
(The squares are used such that positive and negative variation does not cancel.)
[Figure: dartboard with numbered scoring regions]
The expected average score (i.e. the expected value of the score function) for a player can
be computed as
score_avg = E[s(x, y)] = ∫_{(x,y)∈B} s(x, y) · p(x, y) dA.
In other words, the scores are weighted with the probability of the corresponding
position and integrated.
The evaluation of definite integrals using random numbers is called Monte Carlo
integration.
Idea: Express the value of the integral as the expected value of a random variable, and estimate it using samples.
E[g(X)] = ∫_a^b g(x) · p(x) dx ≈ 1/N · Σ_{i=1}^{N} g(x_i)
The law of large numbers tells us that this estimate converges to the expected value as N → ∞; moreover, the estimator is unbiased.
By rewriting

f(x) = g(x) · p(x)

we obtain the actually interesting expression

∫_a^b f(x) dx ≈ 1/N · Σ_{i=1}^{N} f(x_i) / p(x_i)

as an estimator of the integral of f over [a, b].

Approach:
• generate random numbers x_i in [a, b] with arbitrary density p
• estimate the integral by evaluating f(x_i) and averaging f(x_i)/p(x_i)
• condition: p must be a probability density function (PDF)
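To make the estimator concrete, the following is a minimal, self-contained sketch (not from the slides); the integrand f(x) = x², the interval [0, 2] and all identifiers are assumptions chosen purely for illustration.

// Monte Carlo estimate of the integral of f(x) = x^2 over [a, b] = [0, 2]
// (exact value 8/3), using samples drawn with the constant density p(x) = 1/2.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);                                  // fixed seed for reproducibility
    std::uniform_real_distribution<double> uni(0.0, 2.0);  // x_i with density p(x) = 1/2

    const int N = 100000;
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double x = uni(rng);
        sum += (x * x) / 0.5;                              // accumulate f(x_i) / p(x_i)
    }
    std::printf("estimate = %f, exact = %f\n", sum / N, 8.0 / 3.0);
    return 0;
}

The same pattern works for any density p, as long as p is positive wherever f is nonzero.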
[Plot: f(x) sampled at N uniform positions on [0, 1]]
Uniform case (p = const): the integral is approximated by a sum of rectangles under the function, each of width 1/N and height equal to the function value.
Monte Carlo Integration: Interpretation
In general, the density p is not constant over the interval. Dividing the area into rectangles of width 1/(N · p(x_i)) and height f(x_i) gives an idea of how the density distributes the samples.
If ⟨I⟩ is a Monte Carlo estimator for the integral I, then ⟨I⟩ is itself a random variable. One can show that

σ²_⟨I⟩ = Var(⟨I⟩) = 1/N · ∫_a^b ( f(x)/p(x) − I )² · p(x) dx.

Hence, the standard deviation of ⟨I⟩ provides a statistical estimate of the approximation error in terms of the number of samples:

σ_⟨I⟩ ∼ 1/√N

Note: ⟨I⟩ is called unbiased since its expected value is the true value of I.
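The 1/√N behaviour can be checked empirically; the following sketch (not from the slides; the test integrand ∫_0^1 x² dx = 1/3 and all identifiers are assumptions) repeats the estimate many times and prints the RMS error, which roughly halves whenever N is quadrupled.

// Empirical check of the O(N^(-1/2)) error behaviour: the integral of
// f(x) = x^2 over [0, 1] is estimated repeatedly for increasing N.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(23);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    const double exact = 1.0 / 3.0;
    const int runs = 200;

    for (int N = 16; N <= 4096; N *= 4) {
        double mse = 0.0;
        for (int r = 0; r < runs; ++r) {
            double sum = 0.0;
            for (int i = 0; i < N; ++i) { double x = uni(rng); sum += x * x; }
            double err = sum / N - exact;
            mse += err * err;                              // accumulate squared error
        }
        std::printf("N = %5d   rms error = %f\n", N, std::sqrt(mse / runs));
    }
    return 0;
}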
[Figure: hemisphere Ω of directions around the surface normal N]
Let r_1 and r_2 be two uniformly distributed random numbers over [0, 1]. One can show that with polar coordinates φ = 2πr_1 and θ = arccos(1 − r_2),

c_x = cos φ sin θ = cos(2πr_1) · √(1 − (1 − r_2)²)
c_y = sin φ sin θ = sin(2πr_1) · √(1 − (1 − r_2)²)
c_z = cos θ = 1 − r_2

yield directions uniformly distributed over the hemisphere.
The hemisphere must be rotated such that the z-axis is aligned with the surface normal N. Then

ω = c_x · x + c_y · y + c_z · N

where x and y complete an orthonormal basis with N.
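A minimal C++ sketch of these formulas follows (the slides give no implementation; the tangent-frame construction used to rotate the local sample onto N is one common choice and an assumption here).

// Uniform direction on the hemisphere around a surface normal n, following
// the formulas above: phi = 2*pi*r1, cos(theta) = 1 - r2.
#include <cmath>
#include <cstdio>
#include <random>

const double PI = 3.14159265358979323846;

struct V3 { double x, y, z; };

V3 normalize(V3 a) {
    double l = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return {a.x / l, a.y / l, a.z / l};
}
V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

V3 uniformHemisphere(V3 n, double r1, double r2) {
    double cz = 1.0 - r2;                    // cos(theta), uniform in [0, 1]
    double s  = std::sqrt(1.0 - cz * cz);    // sin(theta)
    double cx = std::cos(2.0 * PI * r1) * s;
    double cy = std::sin(2.0 * PI * r1) * s;

    // orthonormal basis (tx, ty, n) around the normal (one possible construction)
    V3 a  = (std::fabs(n.x) > 0.9) ? V3{0, 1, 0} : V3{1, 0, 0};
    V3 tx = normalize(cross(a, n));
    V3 ty = cross(n, tx);

    // omega = c_x * tx + c_y * ty + c_z * n
    return {cx * tx.x + cy * ty.x + cz * n.x,
            cx * tx.y + cy * ty.y + cz * n.y,
            cx * tx.z + cy * ty.z + cz * n.z};
}

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    V3 n = normalize({0.0, 1.0, 1.0});
    V3 w = uniformHemisphere(n, uni(rng), uni(rng));
    std::printf("omega = (%f, %f, %f)\n", w.x, w.y, w.z);
    return 0;
}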
[Figure: example scene with a mirror, a glass object, and a diffuse surface]

At each diffuse hit, the hemisphere is sampled with N random directions. Reflected / refracted hits are traced recursively.
This allows all light paths L(D|S)*E and all global illumination effects.
[Figure: exponentially branching rays in a scene with an emitter, a mirror, a glass object, and a diffuse surface]

Problem: Assuming N samples at each diffuse hit, we have
• N rays for the first indirection,
• N² rays for the second indirection,
• N³ rays for the third indirection,
• ...
The number of rays grows exponentially with recursion depth. Throughput is reduced at each indirection, but more rays are used per recursion level.
Path tracing avoids this blow-up by tracing only a single random direction per hit, i.e. one path per sample.

[Figure: a single random path through the scene (mirror, glass object, diffuse surface)]

At emitters, either emission or reflection is randomly sampled. Paths end at maximum recursion depth, or when leaving the scene.
Assume emission is sampled with probability p_emission. Due to the Monte Carlo principle (average f(x_i)/p(x_i)), to remain unbiased the returned radiance must be multiplied by 1/p_emission. Conversely, if the reflectance is sampled, it must be weighted by 1/(1 − p_emission).

The estimator remains unbiased. Light sources are not handled specially; all objects can be emitters and reflect at the same time.
The principle of local random sampling is very powerful. For example, it provides a much better way to determine recursion depth than simply setting it to a fixed number.

Insight: Paths of all lengths occur in path space. Terminating paths at a fixed recursion depth introduces bias, since all longer paths are not sampled at all.

Assume the path is terminated with probability p_terminate. Then its contribution to the overall average must be weighted by 1/p_terminate. Otherwise, if the path is continued, it is weighted by 1/(1 − p_terminate).
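The following toy program (not from the slides; the attenuation factor and all identifiers are assumptions) illustrates the reweighting of continued paths by 1/(1 − p_terminate): a "path" adds a contribution at every bounce, attenuated by 0.5 per bounce, so the exact total is 1/(1 − 0.5) = 2. Random termination with reweighting reproduces this on average, without any fixed depth limit.

// Toy demonstration of Russian-roulette-style termination.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(3);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    const double attenuation = 0.5;   // throughput loss per bounce
    const double pTerminate  = 0.3;   // termination probability per bounce
    const int    N           = 200000;

    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double weight = 1.0, value = 0.0;
        while (true) {
            value += weight;                              // contribution at this bounce
            if (uni(rng) < pTerminate) break;             // path terminated
            weight *= attenuation / (1.0 - pTerminate);   // continued: reweight by 1/(1 - p)
        }
        sum += value;
    }
    std::printf("estimate = %f, exact = %f\n", sum / N, 1.0 / (1.0 - attenuation));
    return 0;
}

Terminating instead at a fixed depth would systematically miss the contributions of all longer paths.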
computeImage() {
    // set all pixels to (0,0,0)
    clear(pixels);
    for (pixel p : pixels)     // sketch; tracePath(), cameraRay(), N are hypothetical
        for (i = 0; i < N; i++) pixels[p] += tracePath(cameraRay(p)) / N;
}
Emitters can be arbitrarily large; this enables accurate reproduction of soft shadows.
The underlying reason is that light reaches the sensor from a small cone of directions (instead of a single one). Larger aperture ⇒ larger cone.
Depth of Field
This can be incorporated by integrating over the virtual aperture (cone of directions): sample directions and average.
[Figure: camera ray, focus point p_focus, optical axis, focal distance]
Practically: sample the ray origin in the image plane within a circle of radius r_aperture around the pixel center. All rays must intersect the focal plane at the same point p_focus.
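A hedged sketch of this construction (not from the slides): the camera frame, the pixel's central ray, the focal distance and the aperture radius are assumed example values, and all helper names are hypothetical.

// Depth-of-field ray: sample the origin on the aperture disk around the pixel
// center and aim the ray at the focus point p_focus on the focal plane.
#include <cmath>
#include <cstdio>
#include <random>

struct V3 { double x, y, z; };
V3 add(V3 a, V3 b)       { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
V3 scale(double s, V3 a) { return {s * a.x, s * a.y, s * a.z}; }
V3 normalize(V3 a) {
    double l = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return {a.x / l, a.y / l, a.z / l};
}

int main() {
    std::mt19937 rng(11);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    V3 pixelCenter = {0.0, 0.0, 0.0};        // undisturbed ray origin for this pixel
    V3 centralDir  = {0.0, 0.0, -1.0};       // the pixel's undisturbed ray direction
    V3 right = {1, 0, 0}, up = {0, 1, 0};    // image-plane axes
    double focalDist = 5.0, rAperture = 0.1;

    // the point that stays in focus: where the central ray meets the focal plane
    V3 pFocus = add(pixelCenter, scale(focalDist, centralDir));

    // sample an offset inside the aperture disk (rejection sampling)
    double dx, dy;
    do { dx = 1.0 - 2.0 * uni(rng); dy = 1.0 - 2.0 * uni(rng); } while (dx * dx + dy * dy > 1.0);
    V3 origin = add(pixelCenter, add(scale(rAperture * dx, right), scale(rAperture * dy, up)));

    // every sampled ray for this pixel passes through pFocus
    V3 dir = normalize(add(pFocus, scale(-1.0, origin)));

    std::printf("origin = (%.3f, %.3f, %.3f), dir = (%.3f, %.3f, %.3f)\n",
                origin.x, origin.y, origin.z, dir.x, dir.y, dir.z);
    return 0;
}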
Motion Blur
Typical sensors do not expose instantaneously, but over a short time interval. Integrate radiance contributions over the time interval by sampling in time.
We cannot change the convergence rate of Monte Carlo integration (O(N^(−1/2))); however, the variance of the estimator ⟨I⟩ can be reduced.
In the left image, many rays hit the sky light, hence, variance is low. In the right image, the
point light is smaller and hit by fewer rays, hence variance is high.
[Figure: random path at a specular surface (r = 1.0)]
Naïve selection of path directions (uniform hemisphere samples) leads to many paths
with minimal throughput that never hit an emitter.
Options:
• Variance Reduction
  • Importance Sampling
  • Stratified Sampling
  • ...
• Quasi Monte Carlo: uses deterministic sequences of samples (so-called low-discrepancy sequences)
Remember, only the variance of the estimator is improved, but not the convergence rate.
N samples are taken from the domain of f with probability density p(x), which gives the estimator

⟨I⟩ = 1/N · Σ_{i=1}^{N} f(x_i)/p(x_i)   with variance   Var(⟨I⟩) = 1/N · Var( f(x)/p(x) )

By clever choice of p(x) the variance of ⟨I⟩ can be reduced. (If we could choose p(x) = f(x) / ∫ f(x) dx, the variance would vanish completely!)

Principle: Choose p(x) similar to f(x), such that samples can still be generated easily.

Path tracing: locally (at each hit), extend the path along high-contribution directions more often than along ones with low contribution.
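A minimal numerical illustration of this principle (not from the slides; the integrand f(x) = x² on [0, 1] and all identifiers are assumptions): with the density p(x) = 3x², which is exactly proportional to f, every sample f(x_i)/p(x_i) equals 1/3, so the variance vanishes as noted above.

// Importance sampling comparison for the integral of f(x) = x^2 over [0, 1]
// (exact value 1/3): (a) uniform samples with p = 1, (b) samples of
// p(x) = 3x^2, generated by inversion as X = U^(1/3).
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(5);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    const int N = 10000;
    double uniformSum = 0.0, importanceSum = 0.0;
    for (int i = 0; i < N; ++i) {
        double xu = uni(rng);                    // uniform sample, p(xu) = 1
        uniformSum += xu * xu;

        double xi = std::cbrt(uni(rng));         // sample of p(x) = 3x^2
        importanceSum += (xi * xi) / (3.0 * xi * xi);
    }
    std::printf("uniform:    %f\n", uniformSum / N);
    std::printf("importance: %f (exact 1/3)\n", importanceSum / N);
    return 0;
}

How to generate samples with a prescribed density p is the topic of the following slides.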
[Figure: uniform vs. importance sampling at 5 spp and at 75 spp]
Typical distributions:

[Figure: density p(x), cumulative distribution P(X), and construction of a sample X = P⁻¹(U) from a uniform U ∈ [0, 1]]

Construction of samples: solve X = P⁻¹(U), where U is uniformly distributed over [0, 1].

Needed:
• Integral P(x) of p(x)
• Inverse P⁻¹(x) of P(x)
Given

p(x) = (n + 1) · x^n,   P(x) = x^(n+1)

Compute

∫_0^1 x^n dx = [x^(n+1)/(n + 1)]_0^1 = 1/(n + 1)

Then

X = P⁻¹(U) = U^(1/(n+1))   ⇒   X ∼ p(x)

Sample generation: if ξ is uniformly distributed on [0, 1], then ξ^(1/(n+1)) is distributed according to p(x) = (n + 1) · x^n.
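A minimal sketch of this sample generation (not from the slides; n = 2 and all identifiers are assumptions), checking the sample mean against the exact expectation E[X] = (n + 1)/(n + 2):

// Inversion-method sampling for p(x) = (n + 1) x^n on [0, 1]:
// if xi is uniform on [0, 1], then X = xi^(1/(n+1)) is distributed as p.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(9);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    const int n = 2;                               // exponent of the density
    const int N = 100000;
    double sum = 0.0;
    for (int i = 0; i < N; ++i) {
        double xi = uni(rng);
        double x  = std::pow(xi, 1.0 / (n + 1));   // X = P^(-1)(xi)
        sum += x;
    }
    std::printf("sample mean = %f, exact E[X] = %f\n",
                sum / N, double(n + 1) / double(n + 2));
    return 0;
}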
In the following, ξ1 , ξ2 are independent, uniformly distributed random numbers on [0, 1].
Circle area:

A = ∫_0^{2π} ∫_0^1 r dr dθ = π

[Figure: angles θ = 2πξ_1 and θ = 2πξ_2 used to sample the unit disk]
Algorithm:
• choose u_1 and u_2 uniformly distributed
• accept the sample if u_1 < f(u_2)

Accepted samples are distributed as f(x).

[Figure: points (u_2, u_1) in the plane; those below the graph of f(x) are accepted]
// rejection sampling of a point uniformly distributed in the unit disk
do {
    x = 1 - 2 * random()    // x, y uniform in [-1, 1]
    y = 1 - 2 * random()
} while (x*x + y*y > 1)     // reject points outside the disk
// uniformly distributed random direction on the unit sphere
vec3 uniformlyRandomDirection() {
    float u = random();
    float v = random();
    float z = 1.0 - 2.0 * u;              // z uniformly distributed in [-1, 1]
    float r = sqrt(1.0 - z * z);          // radius of the circle at height z
    float angle = 2.0 * PI * v;           // uniform azimuth
    return vec3(r * cos(angle), r * sin(angle), z);
}
f_r(x, ω_i, ω_r) = k_d / π   ⇒   L_r(x, ω_r) = k_d / π · ∫_Ω L_i(x, ω_i) · cos θ_i dω_i
Convert (θ, φ) to local Cartesian coordinates and then to the local orientation (cf. slide 16). If the samples are chosen with the cosine-weighted density p(ω_i) = cos θ_i / π, the cosine term and the factor 1/π cancel, and the local estimator sample becomes

L_r(x, ω_r) = k_d · L_i(x, ω_i).
Cosine-weighted sampling: θ = arcsin √ξ_1.

[Figure: sample distributions on the hemisphere: cosine-weighted (top view), cosine-weighted (side view), uniform (top view)]
Projected to the unit disk, the cosine-weighted hemisphere points are uniformly
distributed.
More details: Philip Dutré’s Global Illumination Compendium and the Sampling Transformation Zoo.
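A minimal sketch of cosine-weighted sample generation in local coordinates (not from the slides; the azimuth φ = 2πξ_2 and all identifiers are assumptions beyond the slide's θ = arcsin √ξ_1):

// Cosine-weighted hemisphere sample in local coordinates (z along the normal):
// theta = arcsin(sqrt(xi1)), phi = 2*pi*xi2. Equivalently, a point uniformly
// distributed on the unit disk is lifted onto the hemisphere.
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double PI = 3.14159265358979323846;
    std::mt19937 rng(13);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    double xi1 = uni(rng), xi2 = uni(rng);
    double sinTheta = std::sqrt(xi1);          // sin(arcsin(sqrt(xi1)))
    double cosTheta = std::sqrt(1.0 - xi1);    // sampled density p(omega) = cos(theta) / pi
    double phi      = 2.0 * PI * xi2;

    double cx = std::cos(phi) * sinTheta;      // uniform point on the unit disk ...
    double cy = std::sin(phi) * sinTheta;
    double cz = cosTheta;                      // ... lifted onto the hemisphere

    std::printf("local direction = (%f, %f, %f)\n", cx, cy, cz);
    return 0;
}

In a renderer, the resulting local direction still has to be rotated into the frame of the surface normal, as described above.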
p_emission =  0.9   if L_e(x) > 0
              0.0   if L_e(x) = 0
The integration domain is subdivided into strata (sg. stratum); the integral is
estimated as the sum of per-stratum estimators.
Benefit: One can show that variance can be reduced. (It does not increase.)
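A minimal illustration (not from the slides; the test integrand ∫_0^1 x² dx = 1/3 and all identifiers are assumptions): N independent uniform samples versus one jittered sample per stratum [i/N, (i+1)/N). Both estimators are unbiased; the stratified one typically shows lower variance.

// Stratified sampling: one jittered sample per stratum of width 1/N,
// compared with N independent uniform samples, for f(x) = x^2 on [0, 1].
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(17);
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    const int N = 64;
    double plain = 0.0, stratified = 0.0;
    for (int i = 0; i < N; ++i) {
        double xp = uni(rng);                // independent uniform sample
        plain += xp * xp;

        double xs = (i + uni(rng)) / N;      // jittered sample in stratum i
        stratified += xs * xs;
    }
    std::printf("plain:      %f\n", plain / N);
    std::printf("stratified: %f (exact 1/3)\n", stratified / N);
    return 0;
}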
• Many samples for paths with high energy; few for paths with low energy

Macro Level
• Selection between emission / reflection
• Direct illumination

Micro Level
• Importance sampling of the BRDF to select ray directions
In principle, paths can be selected arbitrarily, but samples must cover the integration
domain and contributions must then be weighted with 1/probability.
• Unbiased
  • Bidirectional path tracing
  • Metropolis light transport
• Biased
  • Noise filtering
  • Adaptive sampling
  • Irradiance caching
Trace eye path and light path and connect path vertices if they can see each other.

[Figure: eye path with vertices e_0, e_1 and light path with vertices l_0, l_1, l_2; connections tested for visibility]
• Compute a path starting from the eye (eye path) with vertices e_0, e_1, ..., e_t
• Compute a path starting from the light (light path) with vertices l_0, l_1, ..., l_s
• If path vertices e_i and l_j can be connected (i.e. can see each other), then a new path e_0, ..., e_i, l_j, ..., l_0 is found.
• Altogether there are t + s possible paths, over which intelligent averaging is done.
Unbiased algorithm.
Illustration: Noise Filtering

[Figure: path-traced image before and after noise filtering (image: NVIDIA)]

Biased.
Illustration: Adaptive Sampling
Estimate variance and use more samples where variance is large. Biased.
Pre-compute illumination (light paths) at sparse locations in the scene and interpolate
irradiance on diffuse hits.
[Figure: direct illumination (far left), indirect illumination (left), reconstructed image (right), indirect samples (far right)]
Biased.
From here on, we pivot to laying the foundations for real-time rendering.