Control Keys

→  move to next slide (also Enter or Spacebar)
←  move to previous slide
 d  enable/disable drawing on slides
 p  toggle between print and presentation view
CTRL +  zoom in
CTRL -  zoom out
CTRL 0  reset zoom

Slides can also be advanced by clicking on the left or right border of the slide.

Notation

Type | Font | Examples
Variables (scalars) | italics | $a, b, x, y$
Functions | upright | $\mathrm{f}, \mathrm{g}(x), \mathrm{max}(x)$
Vectors | bold, elements row-wise | $\mathbf{a}, \mathbf{b}= \begin{pmatrix}x\\y\end{pmatrix} = (x, y)^\top,$ $\mathbf{B}=(x, y, z)^\top$
Matrices | typewriter | $\mathtt{A}, \mathtt{B}= \begin{bmatrix}a & b\\c & d\end{bmatrix}$
Sets | calligraphic | $\mathcal{A}, \mathcal{B}=\{a, b\}, b \in \mathcal{B}$
Number systems, coordinate spaces | double-struck | $\mathbb{N}, \mathbb{Z}, \mathbb{R}^2, \mathbb{R}^3$

List of Math Symbols

Symbol Meaning
$\Omega$ Solid angle
$\theta$ Polar angle in the spherical coordinate system
$\phi$ Azimuth angle in the spherical coordinate system
$\Phi$ Luminous flux
$I$ Luminous intensity
$E$ Illuminance
$L$ Luminance
$\mathrm{f}_r$ BRDF (Bidirectional Reflectance Distribution Function)
$\mathrm{f}_d$ Diffuse part of the BRDF
$\mathrm{f}_s$ Specular part of the BRDF

List of Math Symbols

Symbol Meaning
$\mathbf{n}$ Surface normal
$\mathbf{v}$ Unit vector in view direction
$\mathbf{l}$ Unit vector in light direction
$\eta$ Refractive index
$F$ Fresnel reflectance
$\mathbf{h}$ Halfway vector between light and view direction
$(\dots)_+$ Ramp function
$\langle \mathbf{a}\cdot \mathbf{b}\rangle$ Scalar product

Image-based Lighting

  • With image-based lighting, the emitted luminance $L_i$ of the environment is specified by a spherical environment image
  • Even without taking occlusion into account, the rendering equation requires that, for a given surface point, all incident luminances $L_i$ are weighted by the BRDF and integrated over the hemisphere. However, this would be too time-consuming for real-time computation.
  • If a parameterized BRDF is given, e.g. Phong BRDF, the integrals can be pre-computed and saved in a parameterized way (Pre-Filtered Environment Map)
  • For example, with the Phong BRDF, the diffuse part can be parameterized by the normal direction and the specular part by the reflection direction, and both can be stored in spherical environment images
  • Often the results for different Phong glossiness exponents $n_s$ are stored in the mipmap levels of a texture

Environment Lighting (Modified Phong BRDF)

  • Starting from the rendering equation, the modified Phong BRDF is inserted (both equations are known from Part 10, Chapter 1):
    $\begin{align}L_o(\mathbf{v}) &= L_e(\mathbf{v}) + \int\limits_\Omega \mathrm{f}_r(\mathbf{v}, \mathbf{l})\, \, L_i(\mathbf{l}) \cos(\theta) \, d\omega\\ &= L_e(\mathbf{v}) + \int\limits_\Omega \left(\rho_d \frac{1}{\pi} + \rho_s \frac{n_s+2}{2 \pi} \,\cos(\alpha)^{n_s} \right)\, \, L_i(\mathbf{l}) \cos(\theta) \, d\omega \end{align}$
  • First, let us consider the diffuse part:
    $\begin{align}L_{o,d}(\mathbf{v}) &= \int\limits_\Omega \rho_d \frac{1}{\pi} \, \,L_i(\mathbf{l}) \cos(\theta) \, d\omega = \rho_d \frac{1}{\pi} \int\limits_\Omega L_i(\mathbf{l}) \cos(\theta) \, d\omega\\ &= \rho_d \frac{1}{\pi} \int\limits_{0}^{2\pi}\, \int\limits_{0}^{\pi/2} L_i(\mathbf{l}) \cos(\theta) \sin(\theta) \, d\theta \, d\phi \end{align}$

Modified Phong BRDF (Diffuse Part)

  • The approximation of the integral with the Riemann sum gives:
    $\begin{align}L_{o,d}(\mathbf{v}) &= \rho_d \frac{1}{\pi} \int\limits_{0}^{2\pi}\, \int\limits_{0}^{\pi/2} L_i(\mathbf{l}) \cos(\theta) \sin(\theta) \, d\theta \, d\phi\\ &\approx \rho_d \frac{1}{\pi} \sum\limits_{k=1}^{K}\, \sum\limits_{j=1}^{J} L_i(\mathbf{l}) \cos(\theta_j) \sin(\theta_j) \underbrace{\Delta\theta}_{\frac{\pi/2}{J}} \, \underbrace{\Delta\phi}_{\frac{2\pi}{K}}\\ &= \rho_d \frac{\pi}{K\,J} \sum\limits_{k=1}^{K}\, \sum\limits_{j=1}^{J} L_i(\mathbf{l}) \cos(\theta_j) \sin(\theta_j) \end{align}$
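The Riemann sum can be checked numerically. The following Python sketch (for illustration only; the function names are ours, not from the lecture code) evaluates the double sum at the grid-cell midpoints for a constant environment $L_i = 1$, where the hemisphere integral of $\cos(\theta)\sin(\theta)$ is exactly $\pi$, so the result must converge to $\rho_d$:

```python
import math

def diffuse_riemann(L_i, K, J, rho_d=1.0):
    """Evaluate rho_d * pi/(K*J) * sum_k sum_j L_i * cos(theta_j) * sin(theta_j)
    at the midpoints of a K x J grid over the hemisphere angles."""
    total = 0.0
    for k in range(K):
        phi = (k + 0.5) * 2.0 * math.pi / K        # midpoint of the k-th phi cell
        for j in range(J):
            theta = (j + 0.5) * (math.pi / 2) / J  # midpoint of the j-th theta cell
            total += L_i(theta, phi) * math.cos(theta) * math.sin(theta)
    return rho_d * math.pi / (K * J) * total

# constant environment L_i = 1: the hemisphere integral of cos*sin is pi,
# so the sum must converge to rho_d * (1/pi) * pi = rho_d
approx = diffuse_riemann(lambda theta, phi: 1.0, K=256, J=128, rho_d=0.8)
```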

Importance Sampling

  • The Riemann sum is usually not a very efficient solution because the function is sampled uniformly
  • Using the theory of importance sampling, we can also approximate any integral by a sum:
    $\int\limits_a^b \mathrm{f}(x) \,dx \approx \frac{1}{N} \sum\limits_{n=1}^{N} \frac{\mathrm{f}(x_n)}{\mathrm{p}(x_n)}$
    where $\mathrm{p}(x)$ is an arbitrary probability density function (PDF) that must fulfill the condition:
    $\int\limits_a^b \mathrm{p}(x) \, dx = 1$
  • In theory, the best PDF (with the smallest variance) would be
    $\mathrm{p}(x) = \frac{\mathrm{f}(x)}{\int_a^b \mathrm{f}(x)\,dx}$
    which means the PDF should follow the shape of the function (i.e., the sampling density should be higher if the function values are higher).
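This effect can be demonstrated with a small Python sketch (names are ours): the integral $\int_0^1 3x^2\,dx = 1$ is estimated once with uniform sampling ($\mathrm{p}(x) = 1$) and once with the shape-following PDF $\mathrm{p}(x) = 2x$, sampled by inverting its CDF ($x = \sqrt{u}$); the second estimator has a noticeably smaller variance:

```python
import math, random

def mc_estimate(f, sample, pdf, n):
    """Monte Carlo estimate of an integral: (1/N) * sum f(x_n) / p(x_n)."""
    return sum(f(x) / pdf(x) for x in (sample() for _ in range(n))) / n

f = lambda x: 3.0 * x * x       # integral of 3x^2 over [0,1] is exactly 1

random.seed(42)
uniform = mc_estimate(f, random.random, lambda x: 1.0, 100_000)

# importance sampling with p(x) = 2x (inverse-CDF sampling: x = sqrt(u));
# the PDF roughly follows the shape of f, so the variance drops
random.seed(42)
weighted = mc_estimate(f, lambda: math.sqrt(random.random()),
                       lambda x: 2.0 * x, 100_000)
```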

Modified Phong BRDF (Diffuse Part)

  • Thus, the approximation of the integral for the diffuse part of the Phong BRDF can be solved with:
    $\begin{align}L_{o,d}(\mathbf{v}) &= \rho_d \frac{1}{\pi} \int\limits_{0}^{2\pi}\, \int\limits_{0}^{\pi/2} \underbrace{L_i(\mathbf{l}) \cos(\theta) \sin(\theta)}_{\mathrm{f}(\theta, \phi)} \, d\theta \, d\phi\\ &\approx \rho_d \frac{1}{\pi} \frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{f}(\theta_n, \phi_n)}{\mathrm{p}(\theta_n, \phi_n)}\\ &= \rho_d \frac{1}{\pi} \frac{1}{N} \sum_{n=1}^{N} \frac{L_i(\mathbf{l}) \cos(\theta_n) \sin(\theta_n)}{\mathrm{p}(\theta_n, \phi_n)} \end{align}$

Sampling of a Hemisphere

  • The PDF $\mathrm{p}(\theta, \phi)$ can be chosen arbitrarily
  • In order to get the best possible result after few samples, the PDF should follow the shape of the function
  • But how can we generate sampling positions with a specific PDF?
    • In many programming languages it is simple to generate uniformly distributed samples
    • Therefore, for the sampling of a hemisphere, the tables on the next slides show formulas for computing azimuth angle $\phi$ and polar angle $\theta$ of a spherical coordinate system from two uniformly distributed random variables $u$ and $v$ in range [0.0, 1.0]
    • The corresponding mathematical derivations can be found here:
      Importance Sampling of a Hemisphere
    • Interactive demonstrator:
      Importance Sampling of a Hemisphere

Sampling of a Hemisphere

PDF / Mapping

Uniform sampling of polar angles (corresponds to the Riemann sum):
$\mathrm{p}(\theta, \phi) = \frac{1}{2\pi} \frac{1}{\pi/2}$
$\phi = 2 \pi \,u, \quad \theta = \frac{\pi}{2}\, v$

Uniform sampling of a hemisphere:
$\mathrm{p}(\theta, \phi) = \frac{1}{2\pi} \,\sin(\theta)$
$\phi = 2 \pi \,u, \quad \theta = \arccos(1 - v)$

(Figures: top and side views of the corresponding sample distributions)

Sampling of a Hemisphere

PDF / Mapping

Phong BRDF (diffuse part):
$\mathrm{p}(\theta, \phi) = \frac{1}{\pi} \, \cos(\theta) \,\sin(\theta)$
$\phi = 2 \pi \,u, \quad \theta = \arcsin(\sqrt{v})$

Phong BRDF (specular part):
$\mathrm{p}(\theta, \phi) = \frac{n_s + 1}{2 \pi} \, \cos(\theta)^{n_s} \,\sin(\theta)$
$\phi = 2 \pi \,u, \quad \theta = \arccos\left((1-v)^{\frac{1}{n_s+1}}\right)$

(Figures: top and side views of the corresponding sample distributions)

Sampling of a Hemisphere

PDF / Mapping

Microfacet GGX distribution with $r_p = 0.5$ and $\alpha = r_p^2$:
$\mathrm{D}_{\tiny \mbox{GGX}}(\theta) = \frac{\alpha^2}{\pi \left(\cos^2(\theta) (\alpha^2-1)+1\right)^2}$
$\mathrm{p}(\theta, \phi) = \mathrm{D}_{\tiny \mbox{GGX}}(\theta)\cos(\theta)\sin(\theta)$
$\phi = 2 \pi \,u, \quad \theta = \arccos\left(\sqrt{\frac{1 - v}{v (\alpha^2-1) + 1} }\right)$

Microfacet GGX distribution with $r_p = 0.25$ (same mapping with the smaller roughness)

(Figures: top and side views of the corresponding sample distributions)
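All mappings in the tables above are obtained by inverting the cumulative distribution of the respective PDF. As a quick numerical sanity check (a Python sketch, not part of the lecture code; names are ours), every listed density must integrate to 1 over the hemisphere. Since all of them are independent of $\phi$, a 1D midpoint quadrature over $\theta$ multiplied by $2\pi$ suffices:

```python
import math

def integrates_over_hemisphere(pdf_theta, steps=20000):
    """Midpoint quadrature of a phi-independent PDF p(theta) over the
    hemisphere angles: 2*pi * integral over theta in [0, pi/2]."""
    h = (math.pi / 2) / steps
    return 2.0 * math.pi * sum(pdf_theta((j + 0.5) * h) for j in range(steps)) * h

ns = 40.0          # Phong glossiness exponent (example value)
alpha = 0.5 ** 2   # GGX roughness, alpha = r_p^2 with r_p = 0.5

def d_ggx(t):
    return alpha**2 / (math.pi * (math.cos(t)**2 * (alpha**2 - 1.0) + 1.0)**2)

pdfs = [
    lambda t: 1.0 / (2.0 * math.pi) * 1.0 / (math.pi / 2),                  # uniform polar angles
    lambda t: math.sin(t) / (2.0 * math.pi),                                 # uniform hemisphere
    lambda t: math.cos(t) * math.sin(t) / math.pi,                           # Phong diffuse (cosine)
    lambda t: (ns + 1.0) / (2.0 * math.pi) * math.cos(t)**ns * math.sin(t),  # Phong specular
    lambda t: d_ggx(t) * math.cos(t) * math.sin(t),                          # GGX half-vectors
]
results = [integrates_over_hemisphere(p) for p in pdfs]
```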

Halton Sequence

Sample positions of the 2D Halton sequence ($x = h_2(n)$, $y = h_3(n)$)
  • To prevent repeated sampling at the same position, pseudo-random sequences (such as the Halton sequence) are often used in practice
  • Halton sequence (in several dimensions)
    ${\small \left(h_2(n), h_3(n), h_5(n), h_7(n), \dots, h_{p_i}(n)\right)^\top}$
    where $p_i$ is the $i$-th prime number and $h_r(n)$ is computed by mirroring the digits of $n$ (represented in base $r$) at the radix point
  • The generated samples are evenly distributed
  • A sequence of arbitrary length is possible
  • Example:
    $h_2((26)_{10})= h_2((11010)_2) = (0.01011)_2 = 11/32$
    $h_3((19)_{10})= h_3((201)_3)= (0.102)_3=11/27$
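The mirroring can be implemented by repeatedly splitting off the lowest digit. A Python sketch (using exact fractions for clarity; names are ours) that reproduces the two examples above:

```python
from fractions import Fraction

def radical_inverse(n, base):
    """Mirror the digits of n, written in the given base, at the radix point."""
    result = Fraction(0)
    factor = Fraction(1, base)
    while n > 0:
        n, digit = divmod(n, base)  # split off the lowest digit of n
        result += digit * factor    # ... and place it behind the radix point
        factor /= base
    return result

h2 = radical_inverse(26, 2)  # 26 = (11010)_2 -> (0.01011)_2 = 11/32
h3 = radical_inverse(19, 3)  # 19 = (201)_3   -> (0.102)_3  = 11/27
```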

Halton Sequence

Index $n$ | Numerical value (base 2) | Mirrored | $h_2(n)$
1 | 1 | 0.1 = 1/2 | 0.5
2 | 10 | 0.01 = 1/4 | 0.25
3 | 11 | 0.11 = 3/4 | 0.75
4 | 100 | 0.001 = 1/8 | 0.125
5 | 101 | 0.101 = 1/2 + 1/8 | 0.625
6 | 110 | 0.011 = 1/4 + 1/8 | 0.375
7 | 111 | 0.111 = 1/2 + 1/4 + 1/8 | 0.875

Halton Sequence

Index $n$ | Numerical value (base 3) | Mirrored | $h_3(n)$
1 | 1 | 0.1 = 1/3 | 0.333
2 | 2 | 0.2 = 2/3 | 0.666
3 | 10 | 0.01 = 1/9 | 0.111
4 | 11 | 0.11 = 1/3 + 1/9 | 0.444
5 | 12 | 0.21 = 2/3 + 1/9 | 0.777
6 | 20 | 0.02 = 2/9 | 0.222
7 | 21 | 0.12 = 1/3 + 2/9 | 0.555
8 | 22 | 0.22 = 2/3 + 2/9 | 0.888

Hammersley Sequence

  • If the number of samples $N$ is known in advance, the Hammersley sequence can be used instead, for which the first dimension can be computed faster
  • Hammersley sequence (in several dimensions)
    $\left(\frac{n}{N}, h_2(n), h_3(n), h_5(n), h_7(n), \dots, h_{p_i}(n)\right)^\top$
(Figure: sample positions of the 2D Hammersley sequence, $x = \frac{n}{N}$, $y = h_2(n)$)
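For base 2, the mirroring reduces to reversing the 32 bits of the index, which is what the `radicalInverse` function in the shaders below does with a few masked shifts. A Python transcription of that trick (for illustration; names are ours):

```python
def radical_inverse_base2(bits):
    """Reverse the 32 index bits, i.e. mirror a base-2 number at the radix point."""
    bits = ((bits << 16) & 0xFFFFFFFF) | (bits >> 16)
    bits = ((bits & 0x55555555) << 1) | ((bits & 0xAAAAAAAA) >> 1)
    bits = ((bits & 0x33333333) << 2) | ((bits & 0xCCCCCCCC) >> 2)
    bits = ((bits & 0x0F0F0F0F) << 4) | ((bits & 0xF0F0F0F0) >> 4)
    bits = ((bits & 0x00FF00FF) << 8) | ((bits & 0xFF00FF00) >> 8)
    return bits * 2.3283064365386963e-10  # divide by 2^32

def hammersley(n, N):
    """n-th 2D Hammersley point: (n/N, h_2(n))."""
    return (n / N, radical_inverse_base2(n))
```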

Modified Phong BRDF (Diffuse Part)

  • And now back to the diffuse part of the Phong BRDF:
    $\begin{align}L_{o,d}(\mathbf{v}) &= \rho_d \frac{1}{\pi} \int\limits_{0}^{2\pi}\, \int\limits_{0}^{\pi/2} \underbrace{L_i(\mathbf{l}) \cos(\theta) \sin(\theta)}_{\mathrm{f}(\theta, \phi)} \, d\theta \, d\phi \approx \rho_d \frac{1}{\pi} \frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{f}(\theta_n, \phi_n)}{\mathrm{p}(\theta_n, \phi_n)}\end{align}$
(Figure: sampling of the hemisphere for the diffuse part, with view direction $\mathbf{v}$ and surface normal $\mathbf{n}$)
  • If the PDF is chosen to be
    $\mathrm{p}(\theta, \phi) = \frac{1}{\pi} \, \cos(\theta) \,\sin(\theta)$
    and because the diffuse part is independent from the viewing direction, we get:
    $\begin{align}L_{o,d}(\mathbf{v}) = L_{o,d}(\mathbf{n}) &\approx \rho_d \underbrace{\frac{1}{N} \sum\limits_{n=1}^{N} L_i(\mathbf{n}, \theta_n, \phi_n)}_{\tiny \mbox{pre-computed value}}\end{align}$
    where $\phi_n = 2 \pi \,h_2(n)$ and $\theta_n = \arcsin\left(\sqrt{h_3(n)}\right)$
    and the sampled hemisphere must be aligned to the normal $\mathbf{n}$
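As a sanity check of this pre-computed value, the following Python sketch evaluates $T_d$ with Halton-driven cosine-weighted sampling for a hypothetical analytic environment $L_i = \cos(\theta)$ around the normal (an assumption just for testing); the cosine-weighted average of $\cos(\theta)$ is exactly $2/3$:

```python
import math

def radical_inverse(n, base):
    """Halton radical inverse in floating point (names are ours)."""
    result, factor = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        result += digit * factor
        factor /= base
    return result

def prefilter_diffuse(L_i, N):
    """Pre-computed diffuse value: plain average of L_i over
    cosine-weighted Halton samples of the hemisphere."""
    total = 0.0
    for n in range(1, N + 1):
        phi = 2.0 * math.pi * radical_inverse(n, 2)
        theta = math.asin(math.sqrt(radical_inverse(n, 3)))
        total += L_i(theta, phi)
    return total / N

# hypothetical test environment L_i = cos(theta) around the normal;
# the cosine-weighted average of cos(theta) is exactly 2/3
t_d = prefilter_diffuse(lambda theta, phi: math.cos(theta), 20_000)
```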

Modified Phong BRDF (Specular Part)

  • The specular part of the Phong BRDF is:
    $\begin{align}L_{o,s}(\mathbf{v}) &= \int\limits_\Omega \left(\rho_s \frac{n_s+2}{2 \pi} \,\cos(\alpha)^{n_s} \right)\, \,L_i(\mathbf{l}) \cos(\theta) \, d\omega \end{align}$
(Figure: comparison of a directional light and environment lighting; shown are the surface normal $\mathbf{n}$, view direction $\mathbf{v}$, light direction $-\mathbf{l}$, reflection direction $\mathbf{r}$, reflected view direction $\mathbf{r}_v$, and the angles $\theta$ and $\alpha$)

Modified Phong BRDF (Specular Part)

  • Problem: The integral depends on both the angle $\alpha$ and the angle $\theta$
  • Since significant contributions arise only around the reflected view vector $\mathbf{r}_v$, the angle $\theta$ between the incoming light direction and the normal is assumed to be constant and is approximated by the angle between the surface normal $\mathbf{n}$ and $\mathbf{r}_v$ (the mean direction of incoming light):
    $\begin{align}L_{o,s}(\mathbf{v}) &= \int\limits_\Omega \left(\rho_s \frac{n_s+2}{2 \pi} \,\cos(\alpha)^{n_s} \right)\, \,L_i(\mathbf{l}) \cos(\theta) \, d\omega\\ &\approx \cos(\bar{\theta})\,\rho_s \int\limits_\Omega \frac{n_s+2}{2 \pi} \,\cos(\alpha)^{n_s} \, \,L_i(\mathbf{l}) \, d\omega\\ &=\langle \mathbf{n}\cdot \mathbf{r}_v\rangle \,\rho_s \int\limits_\Omega \frac{n_s+2}{2 \pi} \,\cos(\alpha)^{n_s}\, \,L_i(\mathbf{l})\, d\omega \end{align}$

Modified Phong BRDF (Specular Part)

  • If the hemisphere is aligned to $\mathbf{r}_v$ for the integration, we get:
    $\begin{align}L_{o,s}(\mathbf{v}) &\approx \langle \mathbf{n}\cdot \mathbf{r}_v\rangle \,\rho_s \int\limits_{0}^{2\pi}\, \int\limits_{0}^{\pi/2} \frac{n_s+2}{2 \pi} \,\cos(\theta)^{n_s} \, \,L_i(\mathbf{l})\, \sin(\theta) d\theta \, d\phi \\ &\approx \langle \mathbf{n}\cdot \mathbf{r}_v\rangle \,\rho_s \frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{f}(\theta_n, \phi_n)}{\mathrm{p}(\theta_n, \phi_n)}\end{align}$
  • If now the PDF is chosen to be
    $\mathrm{p}(\theta, \phi) = \frac{n_s + 1}{2 \pi} \, \cos(\theta)^{n_s} \,\sin(\theta)$
    it follows
    $\begin{align}L_{o,s}(\mathbf{v}) &\approx \langle \mathbf{n}\cdot \mathbf{r}_v\rangle \rho_s \underbrace{\frac{1}{N} \frac{n_s+2}{n_s+1} \sum\limits_{n=1}^{N} L_i(\mathbf{r}_v, \theta_n, \phi_n)}_{\tiny \mbox{pre-computed value}} \end{align}$
    where $\phi_n = 2 \pi \,h_2(n)$ and $\theta_n = \arccos\left((1-h_3(n))^{\frac{1}{n_s+1}}\right)$
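A corresponding sanity check for the specular pre-computed value (a Python sketch; names are ours): for the hypothetical test environment $L_i = \cos(\theta)$ around $\mathbf{r}_v$, the remaining integral is $(n_s+2)\int_0^{\pi/2}\cos(\theta)^{n_s+1}\sin(\theta)\,d\theta = 1$ for every $n_s$, so the estimate must converge to 1:

```python
import math

def radical_inverse(n, base):
    """Halton radical inverse in floating point (names are ours)."""
    result, factor = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        result += digit * factor
        factor /= base
    return result

def prefilter_specular(L_i, n_s, N):
    """Importance-sampled specular sum; the factor <n.r_v> rho_s is omitted,
    exactly as in the pre-computed texture T_s."""
    total = 0.0
    for n in range(1, N + 1):
        phi = 2.0 * math.pi * radical_inverse(n, 2)
        theta = math.acos((1.0 - radical_inverse(n, 3)) ** (1.0 / (n_s + 1.0)))
        total += L_i(theta, phi)
    return (n_s + 2.0) / (n_s + 1.0) * total / N

# hypothetical test environment L_i = cos(theta) around r_v:
# the analytic value of the remaining integral is 1 for every exponent n_s
estimate = prefilter_specular(lambda theta, phi: math.cos(theta), 40.0, 20_000)
```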

Summary: IBL with the Phong BRDF

  • In total, only two accesses to pre-computed textures are required in the shader:
    $\begin{align}L_{o}(\mathbf{v}) &= L_{o,d}(\mathbf{n}) + L_{o,s}(\mathbf{v})\\ &\approx \rho_d \,T_d(\mathbf{n}) + \langle \mathbf{n}\cdot \mathbf{r}_v\rangle \rho_s \,T_s(\mathbf{r}_v, n_s) \end{align}$
  • Pre-filtered environment texture of the diffuse part:
    $T_d(\mathbf{n}) = \frac{1}{N} \sum\limits_{n=1}^{N} L_i(\mathbf{n}, \theta_n, \phi_n)$
    with $\phi_n = 2 \pi \,h_2(n)$ and $\theta_n = \arcsin\left(\sqrt{h_3(n)}\right)$
  • Pre-filtered environment texture of the specular part:
    $ T_s(\mathbf{r}_v, n_s) = \frac{1}{N} \frac{n_s+2}{n_s+1} \sum\limits_{n=1}^{N} L_i(\mathbf{r}_v, \theta_n, \phi_n)$
    with $\phi_n = 2 \pi \,h_2(n)$ and $\theta_n = \arccos\left((1-h_3(n))^{\frac{1}{n_s+1}}\right)$

Example: IBL with the Phong BRDF

(Figures: environment map; pre-filtered diffuse part; result; pre-filtered specular part with $n_s = 600$)

Example: IBL with the Phong BRDF

Importance sampling of the diffuse part:

#version 300 es
precision highp float;
out vec4 outColor;
in vec2 tc; // texture coordinate of the output image in range [0.0, 1.0]
uniform int samples; // number of samples
uniform float envMapLevel; // environment map level
uniform sampler2D envMapImage; // environment image

const float PI = 3.1415926535897932384626433832795;

vec2 directionToSphericalEnvmap(vec3 dir) {
  float s = 1.0 - mod(1.0 / (2.0*PI) * atan(dir.y, dir.x), 1.0);
  float t = 1.0 / (PI) * acos(-dir.z);
  return vec2(s, t);
}

mat3 getNormalSpace(in vec3 normal) {
   vec3 someVec = vec3(1.0, 0.0, 0.0);
   float dd = dot(someVec, normal);
   vec3 tangent = vec3(0.0, 1.0, 0.0);
   if(abs(dd) > 1e-8) {
     tangent = normalize(cross(someVec, normal));
   }
   vec3 bitangent = cross(normal, tangent);
   return mat3(tangent, bitangent, normal);
}

// from http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
// Hacker's Delight, Henry S. Warren, 2001
float radicalInverse(uint bits) {
  bits = (bits << 16u) | (bits >> 16u);
  bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
  bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
  bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
  bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
  return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

vec2 hammersley(uint n, uint N) {
  return vec2(float(n) / float(N), radicalInverse(n));
}

void main() {
  
  float thetaN = PI * (1.0 - tc.y);
  float phiN = 2.0 * PI * (1.0 - tc.x);
  vec3 normal = vec3(sin(thetaN) * cos(phiN), 
                     sin(thetaN) * sin(phiN), 
                     cos(thetaN));
  mat3 normalSpace = getNormalSpace(normal);

  vec3 result = vec3(0.0);

  uint N = uint(samples);

  for(uint n = 1u; n <= N; n++) {
      vec2 p = hammersley(n, N);
      float theta = asin(sqrt(p.y));
      float phi = 2.0 * PI * p.x;
      vec3 pos = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
      vec3 posGlob = normalSpace * pos;
      vec2 uv = directionToSphericalEnvmap(posGlob);
      vec3 luminance = textureLod(envMapImage, uv, envMapLevel).rgb;
      result +=  luminance;
  }
  result = result / float(samples);
  outColor.rgb = result;
  outColor.a = 1.0;
}

Example: IBL with the Phong BRDF

Importance sampling of the specular part:

#version 300 es
precision highp float;
out vec4 outColor;
in vec2 tc; // texture coordinate of the output image in range [0.0, 1.0]
uniform int samples; // number of samples
uniform float shininess; // specular shininess exponent
uniform float envMapLevel; // environment map level
uniform sampler2D envMapImage; // environment image
const float PI = 3.1415926535897932384626433832795;

vec2 directionToSphericalEnvmap(vec3 dir) {
  float s = 1.0 - mod(1.0 / (2.0*PI) * atan(dir.y, dir.x), 1.0);
  float t = 1.0 / (PI) * acos(-dir.z);
  return vec2(s, t);
}

mat3 getNormalSpace(in vec3 normal) {
   vec3 someVec = vec3(1.0, 0.0, 0.0);
   float dd = dot(someVec, normal);
   vec3 tangent = vec3(0.0, 1.0, 0.0);
   if(abs(dd) > 1e-8) {
     tangent = normalize(cross(someVec, normal));
   }
   vec3 bitangent = cross(normal, tangent);
   return mat3(tangent, bitangent, normal);
}

// from http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
// Hacker's Delight, Henry S. Warren, 2001
float radicalInverse(uint bits) {
  bits = (bits << 16u) | (bits >> 16u);
  bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
  bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
  bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
  bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
  return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

vec2 hammersley(uint n, uint N) {
  return vec2(float(n) / float(N), radicalInverse(n));
}

void main() {
  
  float thetaN = PI * (1.0 - tc.y);
  float phiN = 2.0 * PI * (1.0 - tc.x);
  vec3 normal = vec3(sin(thetaN) * cos(phiN), 
                     sin(thetaN) * sin(phiN), 
                     cos(thetaN));
  mat3 normalSpace = getNormalSpace(normal);

  vec3 result = vec3(0.0);

  uint N = uint(samples);

  for(uint n = 1u; n <= N; n++) {
      vec2 p = hammersley(n, N);
      float theta = acos(pow(1.0 - p.y, 1.0/(shininess + 1.0)));
      float phi = 2.0 * PI * p.x;
      vec3 pos = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
      vec3 posGlob = normalSpace * pos;
      vec2 uv = directionToSphericalEnvmap(posGlob);
      vec3 luminance = textureLod(envMapImage, uv, envMapLevel).rgb;
      result +=  luminance;
  }
  result = result / float(samples) * (shininess + 2.0) / (shininess + 1.0);
  outColor.rgb = result;
  outColor.a = 1.0;
}

Example: IBL with the Phong BRDF

Vertex shader for the result image:

#version 300 es
precision highp float;
in vec3 position; // input vertex position from mesh
in vec2 texcoord; // input vertex texture coordinate from mesh
in vec3 normal;   // input vertex normal from mesh

uniform mat4 cameraLookAt; //camera look at matrix
uniform mat4 cameraProjection; //camera projection matrix
uniform mat4 meshTransform0; // mesh0 transformation
uniform mat4 meshTransform1; // mesh1 transformation
uniform mat4 meshTransform0TransposedInverse;
uniform mat4 meshTransform1TransposedInverse;

uniform int gsnMeshGroup;

out vec2 tc; // output texture coordinate of vertex
out vec3 wfn; // output fragment normal of vertex in world space
out vec3 vertPos; // output 3D position in world space

void main(){
  mat4 meshTransform;
  mat4 meshTransformTransposedInverse;
  
  if(gsnMeshGroup == 0) {  // transformation of background sphere
   meshTransform = meshTransform0;
   meshTransformTransposedInverse = meshTransform0TransposedInverse;
  } else { // transformation of mesh
   meshTransform = meshTransform1;
   meshTransformTransposedInverse = meshTransform1TransposedInverse;
  }
  tc = texcoord;
  wfn = vec3(meshTransformTransposedInverse * vec4(normal, 0.0));
  vec4 vertPos4 = meshTransform * vec4(position, 1.0);
  vertPos = vec3(vertPos4) / vertPos4.w;
  gl_Position = cameraProjection * cameraLookAt * vertPos4;
}

Example: IBL with the Phong BRDF

Fragment shader for the result image:

#version 300 es
precision highp float; 
precision highp int;
out vec4 outColor;

#define M_PI 3.1415926535897932384626433832795

in vec2 tc; // texture coordinate of pixel (interpolated)
in vec3 wfn; // fragment normal of pixel in world space (interpolated)
in vec3 vertPos; // fragment vertex position in world space (interpolated)

uniform sampler2D envmapBackground; // min_filter="LINEAR" mag_filter="LINEAR"
uniform bool showBackground; // defaultval="true"
uniform sampler2D envmapDiffuse; // min_filter="LINEAR" mag_filter="LINEAR"
uniform sampler2D envmapSpecular; // min_filter="LINEAR" mag_filter="LINEAR"
uniform float diffuseMix; // weighting factor of diffuse color
uniform vec4 diffuseColor; // diffuse color
uniform float specularMix; // weighting factor of specular color
uniform vec4 specularColor; // specular color
uniform vec3 cameraPos; // camera position in global coordinate system
uniform int gsnMeshGroup;

vec2 directionToSphericalEnvmap(vec3 dir) {
  float s = 1.0 - mod(1.0 / (2.0*M_PI) * atan(dir.y, dir.x), 1.0);
  float t = 1.0 / (M_PI) * acos(-dir.z);
  return vec2(s, t);
}
  
void main() {
  vec3 normal = normalize(wfn.xyz);
  vec3 viewDir = normalize(cameraPos - vertPos);
  
  vec3 rv = reflect(-viewDir, normal);
  
  if(gsnMeshGroup == 0) {
    if(showBackground) {
      // color of envmap sphere
      outColor.rgb = texture(envmapBackground, vec2(1.0-tc.x, tc.y)).rgb;
      outColor.a = 1.0;
    } else {
      discard;
    }
  } else {

    vec3 diff = texture(envmapDiffuse, directionToSphericalEnvmap(normal)).rgb;
    vec3 rd = diffuseMix * pow(diffuseColor.rgb, vec3(2.2));
    vec3 rs = specularMix * pow(specularColor.rgb, vec3(2.2));
    
    // shading front-facing
    vec3 color = rd * diff;
    float rn = dot(rv, normal);
    if(rn > 0.0) {
      vec3 spec = texture(envmapSpecular, directionToSphericalEnvmap(rv)).rgb;
      color += rs * rn * spec;
    }
    
    // shading back-facing
    if(dot(viewDir, normal) < -0.1) {
      color = 0.1 * rs;
    }

    outColor.rgb = pow(color, vec3(1.0/2.2));
    outColor.a = 1.0;
  }
}

GGX Microfacet BRDF

IBL with the GGX microfacet BRDF

  • The GGX microfacet BRDF (introduced in Part 10, Chapter 1) is used by current physically-based rendering (PBR) approaches [Disney 2012] [Unreal Engine 2013] [Frostbite 2014]
  • The specular part of the GGX microfacet BRDF is:
    $\begin{align}L_{o,s}(\mathbf{v}) &= \int\limits_\Omega \frac{\mathrm{F}(\mathbf{v},\mathbf{h})\,\mathrm{D}(\mathbf{h})\,\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h})}{4\,\,\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle} \,L_i(\mathbf{l}) \cos(\theta) \, d\omega \end{align}$
  • This is an integral over all incoming light directions $\mathbf{l}$
  • The normal distribution function $\mathrm{D}(\mathbf{h})$ of the microfacets is a function of the half-vector $\mathbf{h}$
    $\mathrm{D}_{\tiny \mbox{GGX}}(\mathbf{h}) = \frac{\alpha^2}{\pi \left(\langle\mathbf{n} \cdot \mathbf{h}\rangle^2 (\alpha^2-1)+1\right)^2}$
  • Since this distribution is given as an analytic function, the integral should be solved by importance sampling over the half-vector
  • Consequently, we need a mathematical expression for the relation between $\mathbf{l}$ and $\mathbf{h}$

IBL with the GGX microfacet BRDF

(Figure: view direction $\mathbf{v}$, light direction $-\mathbf{l}$, half-vector $\mathbf{h}$, and surface normal $\mathbf{n}$)
  • As can be seen in the figure, $\mathbf{l}$ can be computed by reflecting $\mathbf{v}$ about $\mathbf{h}$
  • It follows:
    $\mathbf{l} = 2 \,(\mathbf{v}^\top \mathbf{h}) \, \mathbf{h} - \mathbf{v}$
  • The PDF for the importance sampling of the half-vector $\mathbf{h}$ is chosen as follows
    $\mathrm{p}_\mathbf{h}(\mathbf{h}) = \mathrm{D}_{\tiny \mbox{GGX}}(\mathbf{h}) \, \cos(\theta_h) \sin(\theta_h)$
    thus, for the PDF of the light direction  $\mathbf{l}$ we get
    $\mathrm{p}_\mathbf{l}(\mathbf{l}) = \frac{\mathrm{D}_{\tiny \mbox{GGX}}(\mathbf{h}) \cos(\theta_h) \sin(\theta_h)}{4 \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}$
    The factor $\frac{1}{4 \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}$ is the determinant of the Jacobian matrix and results from the substitution of the variables (see [Walter 2005]).

IBL with the GGX microfacet BRDF

  • Importance sampling of the integral for the specular part:
    $\begin{align}L_{o,s}(\mathbf{v}) &= \int\limits_{0}^{2\pi}\, \int\limits_{0}^{\pi/2} \underbrace{\frac{\mathrm{F}(\mathbf{v},\mathbf{h})\,\mathrm{D}(\mathbf{h})\,\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h})}{4\,\,\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle} \,L_i(\mathbf{l}) \cos(\theta) \sin(\theta)}_{\mathrm{f}(\theta, \phi)} \, d\theta\,d\phi\\ &\approx \frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{f}(\theta_n, \phi_n)}{\mathrm{p}(\theta_n, \phi_n)}\\ &=\frac{1}{N} \sum_{n=1}^{N} \frac{\frac{\mathrm{F}(\mathbf{v},\mathbf{h})\,\mathrm{D}(\mathbf{h})\,\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h})}{4\,\,\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle} \,L_i(\mathbf{l}) \cos(\theta_n) \sin(\theta_n)}{\frac{\mathrm{D}(\mathbf{h}) \cos(\theta_n) \sin(\theta_n)}{4 \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}}\\ &= \frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{F}(\mathbf{v},\mathbf{h})\,\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h}) \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}{\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle} \,L_i(\mathbf{l})\\ \end{align}$

IBL with the GGX microfacet BRDF

  • The overall result is:
    $\begin{align}L_{o,s}(\mathbf{v}) &\approx \frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{F}(\mathbf{v},\mathbf{h})\,\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h}) \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}{\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle} \,L_i(\mathbf{l})\\ \end{align}$
    with $\mathbf{h} = \begin{pmatrix}\sin(\theta_h) \cos(\phi_h) \\ \sin(\theta_h) \sin(\phi_h)\\ \cos(\theta_h)\end{pmatrix}$  and   $\mathbf{l} = 2 \,(\mathbf{v}^\top \mathbf{h}) \, \mathbf{h} - \mathbf{v}$
    where $\phi_h = 2 \pi \,h_2(n)$ and $\theta_h = \arccos\left(\sqrt{\frac{1 - h_3(n)}{h_3(n) (\alpha^2-1) + 1} }\right)$
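The half-vector sampling and the subsequent reflection can be sketched in Python (illustrative only; names are ours). Two properties are easy to verify: the reflected direction $\mathbf{l}$ is again a unit vector, and $\mathbf{h}$ remains the halfway vector, i.e. $\mathbf{v}^\top\mathbf{h} = \mathbf{l}^\top\mathbf{h}$:

```python
import math

def radical_inverse(n, base):
    """Halton radical inverse in floating point (names are ours)."""
    result, factor = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        result += digit * factor
        factor /= base
    return result

def sample_ggx_halfvector(u, v, alpha):
    """Map uniform (u, v) to a GGX-distributed half-vector (normal = z axis)."""
    phi = 2.0 * math.pi * u
    theta = math.acos(math.sqrt((1.0 - v) / (v * (alpha**2 - 1.0) + 1.0)))
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def reflect_view(v_dir, h):
    """l = 2 (v . h) h - v"""
    d = sum(a * b for a, b in zip(v_dir, h))
    return tuple(2.0 * d * hc - vc for hc, vc in zip(h, v_dir))

alpha = 0.25 ** 2        # alpha = r_p^2 with perceptual roughness r_p = 0.25
v_dir = (0.0, 0.0, 1.0)  # view along the surface normal
h = sample_ggx_halfvector(radical_inverse(1, 2), radical_inverse(1, 3), alpha)
l = reflect_view(v_dir, h)
```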

Example: IBL with the GGX microfacet BRDF

(Figure: spheres rendered with image-based lighting using the GGX microfacet BRDF, reference result)

Example: IBL with the GGX microfacet BRDF

Fragment shader (vertex shader as before):

#version 300 es
precision highp float; 
precision highp int;
out vec4 outColor;

#define PI 3.1415926535897932384626433832795

in vec2 tc; // texture coordinate of pixel (interpolated)
in vec3 wfn; // fragment normal of pixel (interpolated)
in vec3 vertPos; // fragment vertex position (interpolated)

uniform sampler2D envMapImage; // environment images
uniform sampler2D baseColorTexture; // base color 
uniform sampler2D roughnessTexture; // roughness texture
uniform sampler2D metallicTexture; // metallic parameter
uniform sampler2D emissionTexture; // emission texture
uniform float reflectance; // Fresnel reflectance
uniform bool showBackground; 
uniform int samplesSpec; //number of samples
uniform float envLevelSpec; // level for envmap lookup
uniform int samplesDiff; // number of samples
uniform float envLevelDiff; // level for envmap lookup
uniform vec3 cameraPos; // camera position in global coordinate system
uniform int gsnMeshGroup;

vec2 directionToSphericalEnvmap(vec3 dir) {
  float s = 1.0 - mod(1.0 / (2.0*PI) * atan(dir.y, dir.x), 1.0);
  float t = 1.0 / (PI) * acos(-dir.z);
  return vec2(s, t);
}

mat3 getNormalSpace(in vec3 normal) {
   vec3 someVec = vec3(1.0, 0.0, 0.0);
   float dd = dot(someVec, normal);
   vec3 tangent = vec3(0.0, 1.0, 0.0);
   if(abs(dd) > 1e-8) {
     tangent = normalize(cross(someVec, normal));
   }
   vec3 bitangent = cross(normal, tangent);
   return mat3(tangent, bitangent, normal);
}

// from http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
// Hacker's Delight, Henry S. Warren, 2001
float radicalInverse(uint bits) {
  bits = (bits << 16u) | (bits >> 16u);
  bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
  bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
  bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
  bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
  return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

vec2 hammersley(uint n, uint N) {
  return vec2(float(n) / float(N), radicalInverse(n));
}

float random2(vec2 n) { 
	return fract(sin(dot(n, vec2(12.9898, 4.1414))) * 43758.5453);
}
  
float G1_GGX_Schlick(float NdotV, float roughness) {
  float r = roughness; // original
  //float r = 0.5 + 0.5 * roughness; // Disney remapping
  float k = (r * r) / 2.0;
  float denom = NdotV * (1.0 - k) + k;
  return NdotV / denom;
}

float G_Smith(float NoV, float NoL, float roughness) {
  float g1_l = G1_GGX_Schlick(NoL, roughness);
  float g1_v = G1_GGX_Schlick(NoV, roughness);
  return g1_l * g1_v;
}

vec3 fresnelSchlick(float cosTheta, vec3 F0) {
  return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
} 

// adapted from "Real Shading in Unreal Engine 4", Brian Karis, Epic Games
vec3 specularIBLReference(vec3 F0 , float roughness, vec3 N, vec3 V) {
  mat3 normalSpace = getNormalSpace(N);
  vec3 result = vec3(0.0);
  uint sampleCount = uint(samplesSpec);
  float r = random2(tc);
  for(uint n = 1u; n <= sampleCount; n++) {
    //vec2 p = hammersley(n, N);
    vec2 p = mod(hammersley(n, sampleCount) + r, 1.0);
    float a = roughness * roughness;
    float theta = acos(sqrt((1.0 - p.y) / (1.0 + (a * a - 1.0) * p.y)));
    float phi = 2.0 * PI * p.x;
    // sampled h direction in normal space
    vec3 Hn = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
    // sampled h direction in world space
    vec3 H = normalSpace * Hn;
    vec3 L = 2.0 * dot(V, H) * H - V;

    // all required dot products
    float NoV = clamp(dot(N, V), 0.0, 1.0);
    float NoL = clamp(dot(N, L), 0.0, 1.0);
    float NoH = clamp(dot(N, H), 0.0, 1.0);
    float VoH = clamp(dot(V, H), 0.0, 1.0);
    if(NoL > 0.0 && NoH > 0.0 && NoV > 0.0 && VoH > 0.0) {
      // geometry term
      float G = G_Smith(NoV, NoL, roughness);

      // Fresnel term
      vec3 F = fresnelSchlick(VoH, F0);

      vec2 uv = directionToSphericalEnvmap(L);
      vec3 luminance = textureLod(envMapImage, uv, envLevelSpec).rgb;
      result += luminance * F * G * VoH / (NoH * NoV);
    }
  }
  result = result / float(sampleCount);
  return result;
}

vec3 diffuseIBLReference(vec3 normal) {
  mat3 normalSpace = getNormalSpace(normal);

  vec3 result = vec3(0.0);
  uint sampleCount = uint(samplesDiff);
  float r = random2(tc);
  for(uint n = 1u; n <= sampleCount; n++) {
    //vec2 p = hammersley(n, N);
    vec2 p = mod(hammersley(n, sampleCount) + r, 1.0);
    float theta = asin(sqrt(p.y));
    float phi = 2.0 * PI * p.x;
    vec3 pos = vec3(sin(theta) * cos(phi), 
                    sin(theta) * sin(phi), 
                    cos(theta));
    vec3 posGlob = normalSpace * pos;
    vec2 uv = directionToSphericalEnvmap(posGlob);
    vec3 luminance = textureLod(envMapImage, uv, envLevelDiff).rgb;
    result +=  luminance;
  }
  result = result / float(sampleCount);
  return result;
}

void main() {
  vec3 normal = normalize(wfn);
  vec3 viewDir = normalize(cameraPos - vertPos);
  
  if(gsnMeshGroup == 0) {
    if(showBackground) {
      // color of envmap sphere
      outColor.rgb = texture(envMapImage, vec2(1.0-tc.x, tc.y)).rgb;
      outColor.a = 1.0;
    } else {
      discard;
    }
  } else {

    vec3 baseColor = pow(texture(baseColorTexture, tc).rgb, vec3(2.2));
    vec3 emission = pow(texture(emissionTexture, tc).rgb, vec3(2.2));
    float roughness = texture(roughnessTexture, tc).r;
    float metallic = texture(metallicTexture, tc).r;
    
    // F0 for dielectrics in range [0.0, 0.16] 
    // default F0 is (0.16 * 0.5^2) = 0.04
    vec3 f0 = vec3(0.16 * (reflectance * reflectance)); 
    // in case of metals, baseColor contains F0
    f0 = mix(f0, baseColor, metallic);
    
    // compute diffuse and specular factors
    vec3 F = fresnelSchlick(max(dot(normal, viewDir), 0.0), f0);
    vec3 kS = F;
    vec3 kD = 1.0 - kS;
    kD *= 1.0 - metallic;    
    
    vec3 specular = specularIBLReference(f0, roughness, normal, viewDir); 
    vec3 diffuse = diffuseIBLReference(normal);
    vec3 color = emission + kD * baseColor * diffuse + specular;
    outColor.rgb = pow(color, vec3(1.0/2.2));
    outColor.a = 1.0;
  }
}
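The diffuse reference above samples directions with $\theta = \arcsin\sqrt{u}$, i.e. proportionally to $\cos\theta$; the PDF $\cos\theta/\pi$ then cancels both the cosine term of the rendering equation and the $1/\pi$ of the Lambert BRDF, which is why the loop only averages the fetched luminances. A small Python sketch of the same Hammersley-based sampling (a stand-in for the GLSL helpers, not part of the shader) that checks the expected value $\mathrm{E}[\cos\theta] = 2/3$ of this distribution:

```python
import math

def radical_inverse_vdc(i):
    # van der Corput radical inverse in base 2 (second Hammersley coordinate)
    result, f = 0.0, 0.5
    while i > 0:
        result += f * (i & 1)
        i >>= 1
        f *= 0.5
    return result

def hammersley(i, n):
    return (i / n, radical_inverse_vdc(i))

def cosine_sample(u1, u2):
    # theta = asin(sqrt(u)), phi = 2*pi*u, as in diffuseIBLReference
    theta = math.asin(math.sqrt(u2))
    phi = 2.0 * math.pi * u1
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# under the PDF cos(theta)/pi the expected value of cos(theta) is 2/3
n = 4096
mean_cos = sum(cosine_sample(*hammersley(i, n))[2] for i in range(1, n + 1)) / n
```

Because the Hammersley points are low-discrepancy, the estimate converges to $2/3$ much faster than with pseudo-random samples.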

Split-Sum-Approximation

  • Despite importance sampling, the previous method is still too slow for real-time applications. Therefore, a solution is needed that pre-computes as many parts of the computation as possible.
  • To this end, the split-sum approximation is proposed in [Unreal Engine 2013]
  • This approach divides the sum of the previous approach into two parts, which are stored in two pre-computed environment textures
    • (1) Pre-Filtered Environment Map $T_1$
    • (2) BRDF Integration Map $T_2$
    $\begin{align} &\frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{F}(\mathbf{v},\mathbf{h})\,\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h}) \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}{\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle} \,L_i(\mathbf{l})\\ \approx &\underbrace{\frac{1}{N} \sum_{n=1}^{N} L_i(\mathbf{l})}_{T_1} \quad \cdot \quad \underbrace{\frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{F}(\mathbf{v},\mathbf{h})\,\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h}) \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}{\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle} }_{T_2}\\ \end{align}$
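A toy numeric illustration (with hypothetical random weights, not real BRDF samples) of why the split is reasonable: the average of products equals the product of averages exactly when one factor is constant, and approximately when the two factors are only weakly correlated, which holds for typical environment maps:

```python
import random

random.seed(0)
N = 100000
# stand-ins for the per-sample BRDF weight and the incident luminance
f = [random.uniform(0.0, 1.0) for _ in range(N)]
L = [random.uniform(0.0, 2.0) for _ in range(N)]

full_sum  = sum(fi * Li for fi, Li in zip(f, L)) / N   # left-hand side
split_sum = (sum(L) / N) * (sum(f) / N)                # T_1 * T_2

# exact when L is constant: both sides reduce to c * mean(f)
c = 1.5
full_const  = sum(fi * c for fi in f) / N
split_const = c * (sum(f) / N)
```

The residual difference between `full_sum` and `split_sum` is exactly the sample covariance of the two factors.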

Split-Sum-Approximation: (1) Pre-Filtered Envmap

  • The PDF for the importance sampling of both parts (1 and 2) is the same as before, that is
    $\phi_h = 2 \pi \,h_2(n)$ and $\theta_h = \arccos\left(\sqrt{\frac{1 - h_3(n)}{h_3(n) (\alpha^2-1) + 1} }\right)$
  • The results for different values of the roughness $\alpha = r_p^2$, with $r_p$ in the range [0.0, 1.0], are stored in the mipmap levels of the texture:
    $r_p = \frac{\mbox{mipLevel}}{\mbox{mipCount}}$
  • The viewing direction is not known at the time of the pre-computation. Therefore, a perpendicular viewing direction is assumed, i.e. $\mathbf{n} = \mathbf{v} = \mathbf{r}$
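A short Python sketch of this inverse-CDF mapping from a uniform sample $u = h_3(n)$ to the half-vector polar angle $\theta_h$ (the same expression appears in the GLSL code below); it confirms that $u = 0$ yields the normal direction and that larger roughness spreads samples toward grazing angles:

```python
import math

def ggx_sample_theta(u, r_p):
    # theta_h = arccos( sqrt( (1 - u) / (u * (alpha^2 - 1) + 1) ) ), alpha = r_p^2
    a = r_p * r_p
    return math.acos(math.sqrt((1.0 - u) / (u * (a * a - 1.0) + 1.0)))
```

The mapping is monotone in $u$, so the low-discrepancy structure of the Hammersley sequence is preserved after warping.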

Split-Sum-Approximation: (1) Pre-Filtered Envmap

// adapted from "Real Shading in Unreal Engine 4", Brian Karis, Epic Games
vec3 prefilterEnvMap(float roughness, vec3 R) {
  vec3 N = R;
  vec3 V = R;
  uint sampleCount = uint(samples);
  float r = random2(tc);
  mat3 normalSpace = getNormalSpace(N);
  float totalWeight = 0.0;
  vec3 result = vec3(0.0);
  for(uint n = 1u; n <= sampleCount; n++) {
    //vec2 p = hammersley(n, N);
    vec2 p = mod(hammersley(n, sampleCount) + r, 1.0);
    float a = roughness * roughness;
    float theta = acos(sqrt((1.0 - p.y) / (1.0 + (a * a - 1.0) * p.y)));
    float phi = 2.0 * PI * p.x;
    // sampled h direction in normal space
    vec3 Hn = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
    // sampled h direction in world space
    vec3 H = normalSpace * Hn;
    vec3 L = 2.0 * dot(V, H) * H - V;
    
    float NoL = max(dot(N, L), 0.0);
    if(NoL > 0.0) {
      vec2 uv = directionToSphericalEnvmap(L);
      vec3 luminance = textureLod(envMapImage, uv, envMapLevel).rgb;
      result +=  luminance * NoL;
      totalWeight += NoL;
    }
  }
  result = result / totalWeight;
  return result;
}
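A CPU-side Python sketch of the same loop, with two simplifying assumptions: the environment is a plain function of direction instead of a texture, and R is fixed to the z-axis so the normal-space matrix is the identity. It demonstrates the key property of the NoL weighting, namely that a constant environment is reproduced exactly because the weights cancel:

```python
import math

def radical_inverse_vdc(i):
    # van der Corput radical inverse in base 2 (second Hammersley coordinate)
    result, f = 0.0, 0.5
    while i > 0:
        result += f * (i & 1)
        i >>= 1
        f *= 0.5
    return result

def hammersley(i, n):
    return (i / n, radical_inverse_vdc(i))

def prefilter_env_map(roughness, env, sample_count=1024):
    # assumes R = N = V = (0, 0, 1): normal space equals world space
    a = roughness * roughness
    total_weight, result = 0.0, 0.0
    for n in range(1, sample_count + 1):
        px, py = hammersley(n, sample_count)
        theta = math.acos(math.sqrt((1.0 - py) / (1.0 + (a * a - 1.0) * py)))
        phi = 2.0 * math.pi * px
        h = (math.sin(theta) * math.cos(phi),
             math.sin(theta) * math.sin(phi),
             math.cos(theta))
        # L = 2 (V.H) H - V with V = (0, 0, 1)
        l_z = 2.0 * h[2] * h[2] - 1.0
        n_o_l = max(l_z, 0.0)
        if n_o_l > 0.0:
            result += env((2.0 * h[2] * h[0], 2.0 * h[2] * h[1], l_z)) * n_o_l
            total_weight += n_o_l
    return result / total_weight

constant_env = lambda direction: 0.75  # hypothetical uniform sky luminance
```

For a real texture, `env` would perform the spherical-envmap lookup instead of returning a constant.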

Split-Sum-Approximation: (1) Pre-Filtered Envmap

Figure: pre-filtered environment map for roughness values $r_p=0.0$, $0.2$, $0.4$, $0.6$, $0.8$, $1.0$
Source: HDR environment map from HDRI Haven, CC0

Split-Sum-Approximation: (2) BRDF Integration Map

  • By inserting the Schlick approximation for $\mathrm{F}$:
    $\begin{align} \mathrm{F}_{\tiny \mbox{Schlick}}(\mathbf{v}, \mathbf{h}) &= \mathrm{F}_0 + \left(1.0 − \mathrm{F}_0\right) \left(1.0 − \langle\mathbf{v} \cdot \mathbf{h}\rangle \right)^5\\ &= \mathrm{F}_0 + \left(1.0 − \langle\mathbf{v} \cdot \mathbf{h}\rangle \right)^5 - \mathrm{F}_0 \left(1.0 − \langle\mathbf{v} \cdot \mathbf{h}\rangle \right)^5\\ &= \mathrm{F}_0 \left(1.0 - \left(1.0 − \langle\mathbf{v} \cdot \mathbf{h}\rangle \right)^5\right)+ \left(1.0 − \langle\mathbf{v} \cdot \mathbf{h}\rangle \right)^5 \end{align}$
    $\mathrm{F}_0$ can be moved in front of the sum
    $\begin{align} T_2 =&\frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{F}(\mathbf{v},\mathbf{h})\,\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h}) \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}{\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle}\\ =&\mathrm{F}_0 \underbrace{\frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h}) \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}{\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle} \left(1.0 - \left(1.0 − \langle\mathbf{v} \cdot \mathbf{h}\rangle \right)^5\right)}_{T_{2,r}}\\ &+ \underbrace{\frac{1}{N} \sum_{n=1}^{N} \frac{\mathrm{G}(\mathbf{l},\mathbf{v},\mathbf{h}) \,\langle\mathbf{v} \cdot \mathbf{h}\rangle}{\langle\mathbf{n} \cdot \mathbf{l}\rangle\,\,\langle\mathbf{n} \cdot \mathbf{v}\rangle} \left(1.0 − \langle\mathbf{v} \cdot \mathbf{h}\rangle \right)^5}_{T_{2,g}} \end{align}$
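The factoring step can be spot-checked numerically; both forms of the Schlick term agree for any $\mathrm{F}_0$ and $\langle\mathbf{v} \cdot \mathbf{h}\rangle$:

```python
def schlick(f0, voh):
    # original form: F0 + (1 - F0) * (1 - voh)^5
    return f0 + (1.0 - f0) * (1.0 - voh) ** 5

def schlick_factored(f0, voh):
    # F0 * (1 - (1 - voh)^5) + (1 - voh)^5, the form behind T_2r and T_2g
    fc = (1.0 - voh) ** 5
    return f0 * (1.0 - fc) + fc
```

Because $\mathrm{F}_0$ appears only as a linear factor in the first summand, it can be pulled out of the sum and multiplied in at runtime.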

Split-Sum-Approximation: (2) BRDF Integration Map

  • The result of the pre-computation can be stored in the red and green channels of a texture, which is parameterized in x-direction by $\langle\mathbf{n} \cdot \mathbf{v}\rangle$ and in y-direction by the roughness $r_p$ (both in the range [0.0, 1.0])
Figure: BRDF integration map; $T_{2,r}$ and $T_{2,g}$ are combined in the red and green channels, with $\langle\mathbf{n} \cdot \mathbf{v}\rangle$ along the x-axis and $r_p$ along the y-axis

Split-Sum-Approximation: (2) BRDF Integration Map

// adapted from "Real Shading in Unreal Engine 4", Brian Karis, Epic Games
vec2 integrateBRDF(float roughness, float NoV) {
  vec3 V;
  V.x = sqrt(1.0 - NoV * NoV); // sin
  V.y = 0.0;
  V.z = NoV; // cos
  vec2 result = vec2(0.0);
  uint sampleCount = uint(samples);
  for(uint n = 1u; n <= sampleCount; n++) {
    vec2 p = hammersley(n, sampleCount);
    float a = roughness * roughness;
    float theta = acos(sqrt((1.0 - p.y) / (1.0 + (a * a - 1.0) * p.y)));
    float phi = 2.0 * PI * p.x;
    // sampled h direction in normal space
    vec3 H = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
    vec3 L = 2.0 * dot(V, H) * H - V;

    // because N = vec3(0.0, 0.0, 1.0) follows
    float NoL = clamp(L.z, 0.0, 1.0);
    float NoH = clamp(H.z, 0.0, 1.0);
    float VoH = clamp(dot(V, H), 0.0, 1.0);
    if(NoL > 0.0) {
      float G = G_Smith(NoV, NoL, roughness);
      float G_Vis = G * VoH / (NoH * NoV);
      float Fc = pow(1.0 - VoH, 5.0);
      result.x += (1.0 - Fc) * G_Vis;
      result.y += Fc * G_Vis;
    }
  }
  result = result / float(sampleCount);
  return result;
}
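The same integration can also be run on the CPU to fill the texture; a Python sketch follows. Since G_Smith is defined elsewhere in the shader, the sketch assumes the Schlick-GGX form with $k = \alpha/2$ (the IBL variant suggested in the UE4 course notes); the actual G_Smith of the slides may differ. For roughness 0 every sample collapses onto the mirror direction, so the result must be exactly $(1 - (1-\langle\mathbf{n}\cdot\mathbf{v}\rangle)^5,\ (1-\langle\mathbf{n}\cdot\mathbf{v}\rangle)^5)$, which serves as a sanity check:

```python
import math

def radical_inverse_vdc(i):
    # van der Corput radical inverse in base 2 (second Hammersley coordinate)
    result, f = 0.0, 0.5
    while i > 0:
        result += f * (i & 1)
        i >>= 1
        f *= 0.5
    return result

def hammersley(i, n):
    return (i / n, radical_inverse_vdc(i))

def g_smith(nov, nol, roughness):
    # ASSUMPTION: Schlick-GGX Smith term with k = alpha / 2 (alpha = roughness^2)
    k = (roughness * roughness) / 2.0
    g1 = lambda c: c / (c * (1.0 - k) + k) if (c * (1.0 - k) + k) > 0.0 else 0.0
    return g1(nov) * g1(nol)

def integrate_brdf(roughness, nov, sample_count=1024):
    v = (math.sqrt(1.0 - nov * nov), 0.0, nov)  # view vector, N = (0, 0, 1)
    a = roughness * roughness
    rx = ry = 0.0
    for n in range(1, sample_count + 1):
        px, py = hammersley(n, sample_count)
        theta = math.acos(math.sqrt((1.0 - py) / (1.0 + (a * a - 1.0) * py)))
        phi = 2.0 * math.pi * px
        h = (math.sin(theta) * math.cos(phi),
             math.sin(theta) * math.sin(phi),
             math.cos(theta))
        voh = v[0] * h[0] + v[2] * h[2]  # dot(V, H), since V.y = 0
        l_z = 2.0 * voh * h[2] - v[2]    # z component of L = 2 (V.H) H - V
        nol = min(max(l_z, 0.0), 1.0)
        noh = min(max(h[2], 0.0), 1.0)
        voh = min(max(voh, 0.0), 1.0)
        if nol > 0.0:
            g_vis = g_smith(nov, nol, roughness) * voh / (noh * nov)
            fc = (1.0 - voh) ** 5
            rx += (1.0 - fc) * g_vis
            ry += fc * g_vis
    return (rx / sample_count, ry / sample_count)
```

Evaluating this over a grid of $(\langle\mathbf{n}\cdot\mathbf{v}\rangle, r_p)$ values and writing the pairs into the red and green channels produces the integration map shown above.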

Example: Split-Sum-Approximation

Figure: input for the environment map lighting example

Example: Split-Sum-Approximation

Fragment shader (vertex shader as before):

#version 300 es
precision highp float; 
precision highp int;
out vec4 outColor;

#define PI 3.1415926535897932384626433832795

in vec2 tc; // texture coordinate of pixel (interpolated)
in vec3 wfn; // fragment normal of pixel (interpolated)
in vec3 vertPos; // fragment vertex position (interpolated)
uniform sampler2D envmapImage; 
uniform sampler2D prefilteredEnvmap; 
uniform sampler2D brdfIntegrationMap; 
uniform sampler2D diffuseMap; 
uniform sampler2D baseColorTexture; 
uniform sampler2D roughnessTexture; // roughness texture
uniform sampler2D metallicTexture; // metallic texture
uniform sampler2D emissionTexture; // emission texture
uniform float reflectance; // Fresnel reflectance
uniform bool showBackground;
uniform vec3 cameraPos; // camera position in global coordinate system
uniform int mipCount; // number of usable mipmap levels
uniform int gsnMeshGroup;

vec2 directionToSphericalEnvmap(vec3 dir) {
  float s = 1.0 - mod(1.0 / (2.0*PI) * atan(dir.y, dir.x), 1.0);
  float t = 1.0 / (PI) * acos(-dir.z);
  return vec2(s, t);
}

// adapted from "Real Shading in Unreal Engine 4", Brian Karis, Epic Games
vec3 specularIBL(vec3 F0 , float roughness, vec3 N, vec3 V) {
  float NoV = clamp(dot(N, V), 0.0, 1.0);
  vec3 R = reflect(-V, N);
  vec2 uv = directionToSphericalEnvmap(R);
  vec3 prefilteredColor = textureLod(prefilteredEnvmap, uv, 
                                     roughness*float(mipCount)).rgb;
  vec4 brdfIntegration = texture(brdfIntegrationMap, vec2(NoV, roughness));
  return prefilteredColor * ( F0 * brdfIntegration.x + brdfIntegration.y );
}

vec3 diffuseIBL(vec3 normal) {
  vec2 uv = directionToSphericalEnvmap(normal);
  return texture(diffuseMap, uv).rgb;
}

vec3 fresnelSchlick(float cosTheta, vec3 F0) {
  return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
} 

void main() {
  vec3 normal = normalize(wfn);
  vec3 viewDir = normalize(cameraPos - vertPos);
  
  if(gsnMeshGroup == 0) {
    if(showBackground) {
      // color of envmap sphere
      outColor.rgb = texture(envmapImage, vec2(1.0-tc.x, tc.y)).rgb;
      outColor.a = 1.0;
    } else {
      discard;
    }
  } else {

    vec3 baseColor = pow(texture(baseColorTexture, tc).rgb, vec3(2.2));
    vec3 emission = pow(texture(emissionTexture, tc).rgb, vec3(2.2));
    float roughness = texture(roughnessTexture, tc).r;
    float metallic = texture(metallicTexture, tc).r;
    
    // F0 for dielectrics in range [0.0, 0.16] 
    // default F0 is (0.16 * 0.5^2) = 0.04
    vec3 f0 = vec3(0.16 * (reflectance * reflectance)); 
    // in case of metals, baseColor contains F0
    f0 = mix(f0, baseColor, metallic);
    
    // compute diffuse and specular factors
    vec3 F = fresnelSchlick(max(dot(normal, viewDir), 0.0), f0);
    vec3 kS = F;
    vec3 kD = 1.0 - kS;
    kD *= 1.0 - metallic;    
    
    vec3 specular = specularIBL(f0, roughness, normal, viewDir); 
    vec3 diffuse = diffuseIBL(normal);
    
    vec3 color = emission + kD * baseColor * diffuse + specular;
    outColor.rgb = pow(color, vec3(1.0/2.2));
    outColor.a = 1.0;
  }
}
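For CPU-side pre-computation of the lookup textures, the directionToSphericalEnvmap helper can be ported directly; a Python version of the same formula as in the GLSL above:

```python
import math

def direction_to_spherical_envmap(x, y, z):
    # s: azimuth angle mapped (flipped) to [0, 1]; t: polar angle mapped to [0, 1]
    s = 1.0 - (math.atan2(y, x) / (2.0 * math.pi)) % 1.0
    t = math.acos(-z) / math.pi
    return (s, t)
```

For example, the $+y$ axis maps to $(0.75, 0.5)$ and the $-z$ pole to $t = 0$; Python's `%` operator matches GLSL `mod` for negative azimuth angles.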

References

Are there any questions?


Please notify me by e-mail if you have questions or suggestions for improvement, or if you have found typos: Contact

More lecture slides

Slides in German (Folien auf Deutsch)