1.
In practice, however, a linear depth buffer like this is almost never used. For correct projection properties a non-linear depth equation is used that is proportional to 1/z. What this basically does is give us enormous precision when z is small and much less precision when z is far away. Think about this for a second: do we really want depth values for objects 1000 units away to have the same precision as highly detailed objects at a distance of 1? The linear equation doesn't take this into account.
Since the non-linear function is proportional to 1/z, z-values between 1.0 and 2.0, for example, would result in depth values between 1.0 and 0.5, which is half of the precision a float provides us, giving us enormous precision at small z-values. Z-values between 50.0 and 100.0 would account for only about 2% of the float's precision, which is exactly what we want. Such an equation, one that also takes the near and far distances into account, is given below:

F_depth = (1/z - 1/near) / (1/far - 1/near)
Don’t worry if you don’t know exactly what is going on with this equation. The important thing to remember is that the values in the depth buffer are not linear in screen-space (they are linear in view-space before the projection matrix is applied). A value of 0.5 in the depth buffer does not mean the object’s z-values are halfway in the frustum; the z-value of the vertex is actually quite close to the near plane! You can see the non-linear relation between the z-value and the resulting depth buffer’s value in the following graph:
As you can see, the depth values are largely determined by the small z-values, giving us enormous depth precision for objects close by. The equation to transform z-values (from the viewer's perspective) is embedded within the projection matrix, so when we transform vertex coordinates from view to clip and then to screen-space the non-linear equation is applied. If you're curious as to what the projection matrix actually does in detail I suggest the following great article.
Summary:
a. The depth buffer stores values in [0, 1]. A point halfway between the near plane and the far plane does not map to a depth value of 0.5; the stored depth is not linearly related to the distance between the near and far planes.
b. After the projection transform and the perspective divide, the point lies in NDC, and the stored depth is proportional to 1/z (of the form A * 1/z + B). Taking the simplest case A = 1, B = 0: if z ranges over [1, 2], the corresponding depth values range over [1, 0.5]. So z-values close to the near plane already map onto half of the whole depth range, which means depth precision near the near plane is much higher.
c. The mapping from z-values to depth values, [near, far] -> [0, 1], is, note, not linear but proportional to 1/z.
2. Transform the non-linear depth values of the fragment back to their linear siblings.
(i.e. convert the non-linear depth values back into linear ones)
float near = 0.1;
float far = 100.0;

float LinearizeDepth(float depth)
{
    float z = depth * 2.0 - 1.0; // back to NDC
    return (2.0 * near * far) / (far + near - z * (far - near));
    // equivalent up to sign: (2.0 * near * far) / (z * (far - near) - far - near)
    // which yields the (negative) view-space Ze directly
}

void main()
{
    float depth = LinearizeDepth(gl_FragCoord.z) / far; // divide by far to get depth in range [0,1] for visualization purposes
    FragColor = vec4(vec3(depth), 1.0);
}
Details

Looking at the LinearizeDepth function:
1. First map depth from [0, 1] back to [-1, 1]:
float z = depth * 2.0 - 1.0; // back to NDC
2. After step 1, the depth value is in NDC space. Executing
return (2.0 * near * far) / (far + near - z * (far - near));
then transforms the NDC depth value back into view (camera) space.
For the details, refer to:
http://www.songho.ca/opengl/gl_projectionmatrix.html
which derives, by solving the projection for the view-space depth:
Ze = (2 * far * near) / (z_ndc * (far - near) - far - near)
In other words, we are simply inverting the projection to recover Ze. Note, however, that the recovered Ze lies in [-near, -far] (OpenGL's camera looks down -Z), so it must be negated to obtain the formula used in the code.
3. The z value returned by LinearizeDepth lies in [near, far], so we finally divide by far to map it into [0, 1].