Control Keys

move to next slide (also Enter or Spacebar).
move to previous slide.
 d  enable/disable drawing on slides
 p  toggle between print and presentation view
CTRL  +  zoom in
CTRL  -  zoom out
CTRL  0  reset zoom

Slides can also be advanced by clicking on the left or right border of the slide.

Notation

  • Variables (scalars): italics, e.g. $a, b, x, y$
  • Functions: upright, e.g. $\mathrm{f}, \mathrm{g}(x), \mathrm{max}(x)$
  • Vectors: bold, written as column vectors, e.g. $\mathbf{a}, \mathbf{b}= \begin{pmatrix}x\\y\end{pmatrix} = (x, y)^\top,$ $\mathbf{B}=(x, y, z)^\top$
  • Matrices: typewriter, e.g. $\mathtt{A}, \mathtt{B}= \begin{bmatrix}a & b\\c & d\end{bmatrix}$
  • Sets: calligraphic, e.g. $\mathcal{A}, \mathcal{B}=\{a, b\}, b \in \mathcal{B}$
  • Number systems, coordinate spaces: double-struck, e.g. $\mathbb{N}, \mathbb{Z}, \mathbb{R}^2, \mathbb{R}^3$

Vulkan Ray Tracing Pipeline

  • Vulkan is a programming interface for graphics applications
  • The Vulkan programming interface is standardized by the Khronos Group
  • At the end of 2020, an extension for ray tracing was defined for GLSL 4.6: GLSL_EXT_ray_tracing
  • This extension is used in the context of this lecture
  • The standard is supported by Nvidia GeForce RTX graphics cards and AMD graphics cards of the RX 6000 series

Vulkan Ray Tracing Emulator

  • Since not all students have access to a suitable graphics card with ray tracing acceleration hardware, a web-based emulator is provided
  • The emulator translates the ray tracing shader code into a standard WebGL fragment shader that runs in the browser.
  • The emulator can also export a C++ Vulkan project

raytracing_emulator

Ray Tracing Shader Pipeline

raytracing_pipeline

The ray tracing pipeline consists of 5 different shaders:

  • Ray Generation
  • Closest-Hit
  • Miss
  • Intersection
  • Any-Hit

Ray Generation Shader

raytracing_pipeline
  • The ray generation shader creates rays and submits them to the "acceleration structure traversal" block by calling the function traceRayEXT(...)
  • The ray traversal block is the non-programmable part of the pipeline
  • The most important parameter of the traceRayEXT function is the payload variable that contains the collected information of the ray
  • The payload variable is of a user-defined type and can be modified in the shader stages that are called for a particular ray during its traversal
  • Once the ray traversal is completed, the traceRayEXT function returns to the caller and the payload variable can be evaluated in the ray generation shader to produce an output image

Intersection Shader

raytracing_pipeline
  • If the ray traversal detects an intersection of the ray with a user-defined bounding box (or triangle of a triangle mesh), the intersection shader is called
  • If the intersection shader determines that a ray-primitive intersection has occurred within the bounding box, it notifies the ray traversal with the function reportIntersectionEXT(...)
  • Furthermore, the intersection shader can write to a hitAttributeEXT variable (which can be of user-defined type).
  • For triangles, an intersection shader is already built-in
  • The built-in triangle intersection will provide barycentric coordinates of the hit location within the triangle with the "hitAttributeEXT vec2 baryCoord" variable
  • For geometric primitives that are not triangles (such as cubes, cylinders, spheres, parametric surfaces, etc.) you must provide a custom intersection shader

Any-Hit Shader

raytracing_pipeline
  • If an intersection is reported and an any-hit shader is provided, the any-hit shader is called
  • The task of the any-hit shader is to accept or ignore a hit
  • A typical application for an any-hit shader is to handle a partly transparent surface. If the hit occurs in a transparent region, it should be ignored.
  • A hit is ignored with the ignoreIntersectionEXT statement
  • It is also possible to terminate the ray traversal in the any-hit shader with the terminateRayEXT statement
  • If no any-hit shader is provided or the ignoreIntersectionEXT statement is not called in the shader, the hit is reported to the ray traversal

Closest-Hit and Miss Shader

raytracing_pipeline
  • Once the ray traversal has determined all possible hits along the ray and at least one hit has occurred, the closest-hit shader is called for the closest one of these hits
  • Otherwise, if no hit occurred, the miss shader is called
  • Both types of shaders can manipulate the ray payload
  • For example, the miss shader could submit the color of the environment into the payload and the closest-hit shader could compute the shading color for the hit surface
  • To this end, the closest-hit shader can access several built-in variables, such as the gl_PrimitiveID or the gl_InstanceID that are set accordingly for each hit

Closest-Hit and Miss Shader

raytracing_pipeline
  • The closest-hit and miss shader can also call the traceRayEXT function, which submits another ray into the ray traversal block and might create a recursion
  • A typical application in the closest-hit shader is sending a "shadow" ray in the direction of the light source to determine if the light is occluded by other objects
  • As this ray might trigger another call of the closest-hit shader, a recursion is created
  • In general, it is recommended to keep the number of recursive function calls as low as possible for best performance

Built-In Variables

Availability in the five shader stages (ray generation, closest-hit, miss, intersection, any-hit), according to the GLSL_EXT_ray_tracing specification:

uvec3 gl_LaunchIDEXT: ray generation, closest-hit, miss, intersection, any-hit
uvec3 gl_LaunchSizeEXT: ray generation, closest-hit, miss, intersection, any-hit
int gl_PrimitiveID: closest-hit, intersection, any-hit
int gl_InstanceID: closest-hit, intersection, any-hit
int gl_InstanceCustomIndexEXT: closest-hit, intersection, any-hit
int gl_GeometryIndexEXT: closest-hit, intersection, any-hit
vec3 gl_WorldRayOriginEXT: closest-hit, miss, intersection, any-hit
vec3 gl_WorldRayDirectionEXT: closest-hit, miss, intersection, any-hit
vec3 gl_ObjectRayOriginEXT: closest-hit, intersection, any-hit
vec3 gl_ObjectRayDirectionEXT: closest-hit, intersection, any-hit
float gl_RayTminEXT: closest-hit, miss, intersection, any-hit
float gl_RayTmaxEXT: closest-hit, miss, intersection, any-hit
uint gl_IncomingRayFlagsEXT: closest-hit, miss, intersection, any-hit
float gl_HitTEXT: closest-hit, any-hit (alias for gl_RayTmaxEXT)
uint gl_HitKindEXT: closest-hit, any-hit
mat4x3 gl_ObjectToWorldEXT: closest-hit, intersection, any-hit
mat4x3 gl_WorldToObjectEXT: closest-hit, intersection, any-hit

Built-In Constants

const uint gl_RayFlagsNoneEXT = 0u;
const uint gl_RayFlagsOpaqueEXT = 1u;
const uint gl_RayFlagsNoOpaqueEXT = 2u;
const uint gl_RayFlagsTerminateOnFirstHitEXT = 4u;
const uint gl_RayFlagsSkipClosestHitShaderEXT = 8u;
const uint gl_RayFlagsCullBackFacingTrianglesEXT = 16u;
const uint gl_RayFlagsCullFrontFacingTrianglesEXT = 32u;
const uint gl_RayFlagsCullOpaqueEXT = 64u;
const uint gl_RayFlagsCullNoOpaqueEXT = 128u;
const uint gl_HitKindFrontFacingTriangleEXT = 0xFEu;
const uint gl_HitKindBackFacingTriangleEXT = 0xFFu;

TLAS and BLAS

tlas_blas (figure: a TLAS referencing three BLAS instances with gl_InstanceID = 0, 1, 2; one BLAS is a triangle mesh, the other two are collections of intersection boxes; each instance carries its own gl_ObjectToWorldEXT / gl_WorldToObjectEXT transformation)

TLAS and BLAS

  • The 3D scene is represented by the acceleration structure that is used in the acceleration structure traversal block
  • The acceleration structure consists of a top-level acceleration structure (TLAS) and multiple bottom-level acceleration structures (BLAS)
  • Each BLAS can be either a triangle mesh or a user-defined collection of axis-aligned bounding boxes (AABBs)
  • A BLAS can be instantiated in the TLAS; each instance gets a unique gl_InstanceID
  • Furthermore, each triangle in a triangle mesh and each AABB in the collection of intersection boxes gets a consecutive gl_PrimitiveID
  • Each BLAS has its transformation from object to world space, which is initially assigned when the BLAS is added to the TLAS
  • This transformation is accessible in the shader via the gl_ObjectToWorldEXT and gl_WorldToObjectEXT variables
  • When a hit occurs in the ray traversal and the intersection, any-hit, or closest-hit shader is called, these variables are set accordingly

Simple Ray Generation Shader Examples

Rays in 3D Space

ray (figure: a ray with origin $\mathbf{s}$, direction $\mathbf{r}$, and a second point $\mathbf{b}$)
  • A ray in 3D space can be defined by its
    • origin $\mathbf{s}=(s_x, s_y, s_z)^\top$ and
    • direction vector $\mathbf{r}=(r_x, r_y, r_z)^\top$
  • The length of the direction vector is typically normalized to 1
  • For example, to generate a ray starting from the origin $\mathbf{s}$ in the direction of a second point $\mathbf{b}$, the following applies:
    $\mathbf{r} = \frac{\mathbf{b} - \mathbf{s}}{|\mathbf{b} - \mathbf{s}|}$
  • Every point $\mathbf{p}$ on the ray can be parameterized by a scalar $t$ (figure: ray parameterization):
    $\mathbf{p}(t) = \mathbf{s} + t\,\mathbf{r} \quad \forall\, t \ge 0$
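As a quick numerical check of these two formulas, a minimal Python sketch (the vectors $\mathbf{s}$ and $\mathbf{b}$ are arbitrary example values):

```python
import math

def normalize(v):
    # scale a 3D vector to unit length
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def ray_direction(s, b):
    # r = (b - s) / |b - s|
    return normalize(tuple(bi - si for bi, si in zip(b, s)))

def point_on_ray(s, r, t):
    # p(t) = s + t * r, defined for t >= 0
    return tuple(si + t * ri for si, ri in zip(s, r))

s = (1.0, 2.0, 3.0)
b = (4.0, 2.0, 3.0)
r = ray_direction(s, b)        # (1.0, 0.0, 0.0)
p = point_on_ray(s, r, 2.0)    # (3.0, 2.0, 3.0)
```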

Perspective Camera

perspective_camera (figure: center of projection $\mathbf{c}$, image plane spanning $[-1, 1]$ in $y$, focal length $f$, opening angle $\Theta$, image size $w \times h$, and a point $\mathbf{p}$ on the image plane)
  • In the following, we discuss how to compute a ray through a pixel coordinate of a perspective camera
  • With a perspective camera, all rays start in the center of projection $\mathbf{c}$
  • So the origin $\mathbf{s}$ of the camera rays is already defined:
    $\mathbf{s}= \mathbf{c} = (0.0, 0.0, 0.0)^\top$

Perspective Camera

perspective_camera
  • Following the OpenGL convention, the camera looks in the direction of the negative $z$-axis, and the image plane spans the range $[-1.0, 1.0]$ in the $y$-direction
  • With a given opening angle $\Theta$ in y-direction, the focal length is calculated as (see figure):

    $\frac{f}{1} = \frac{\cos(0.5\,\Theta)}{\sin(0.5\,\Theta)} \Leftrightarrow f = \cot(0.5\,\Theta)$

Perspective Camera

perspective_camera
  • Consequently, the 3D position of a point on the image plane is:
    $\mathbf{p} = (p_x, p_y, -f)^\top$
    and the direction vector of the camera ray is:
    $\mathbf{r} = \frac{\mathbf{p} - \mathbf{c}}{|\mathbf{p} - \mathbf{c}|} = \frac{\mathbf{p}}{|\mathbf{p}|} $

Perspective Camera

img_coord (figure: pixel grid and pixel area; the outer edge of the pixel grid shown in pixel, texture, and normalized device coordinates)
  • Conversion between pixel coordinates $(x_p, y_p)$ and texture coordinates $(x_t, y_t)$ for an image width $w$ and image height $h$ in pixels:
    $\begin{pmatrix}x_t\\y_t\end{pmatrix} = \begin{pmatrix}\frac{x_p + 0.5}{w}\\ \frac{y_p + 0.5}{h}\end{pmatrix}$

Perspective Camera

img_coord
  • Conversion between texture coordinates $(x_t, y_t)$ and normalized device coordinates $(x_n, y_n)$:
    $\begin{pmatrix}x_n\\y_n\end{pmatrix} = \begin{pmatrix}2.0 \,x_t - 1.0\\ 2.0 \,y_t - 1.0\end{pmatrix}$

Perspective Camera

img_coord
  • Conversion between normalized device coordinates $(x_n, y_n)$ and the 3D position of the point $\mathbf{p}$:
    $\mathbf{p} = \begin{pmatrix}p_x\\p_y\\p_z\end{pmatrix} = \begin{pmatrix}\frac{w}{h} \, x_n\\ y_n\\ -f\end{pmatrix}$
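The three coordinate conversions can be chained into a single function. The following Python sketch mirrors the formulas above (the $90°$ opening angle is an arbitrary example value):

```python
import math

def camera_ray(xp, yp, w, h, theta_deg):
    # pixel -> texture coordinates
    xt = (xp + 0.5) / w
    yt = (yp + 0.5) / h
    # texture -> normalized device coordinates
    xn = 2.0 * xt - 1.0
    yn = 2.0 * yt - 1.0
    # focal length from the vertical opening angle: f = cot(0.5 * theta)
    f = 1.0 / math.tan(0.5 * math.radians(theta_deg))
    # point on the image plane (aspect-corrected x) and normalized direction
    p = (w / h * xn, yn, -f)
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

# the single pixel of a 1x1 image looks straight along the negative z-axis
r = camera_ray(0, 0, 1, 1, 90.0)   # approximately (0.0, 0.0, -1.0)
```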

Perspective Camera - Ray Generation Shader

// Returns a camera ray for a camera at the origin that is 
// looking in negative z-direction. "fieldOfViewY" must be given in degrees.
// "point" must be in range [0.0, 1.0] to cover the complete image plane.
//
vec3 getCameraRay(float fieldOfViewY, float aspectRatio, vec2 point) {
  // compute focal length from given field-of-view
  float focalLength = 1.0 / tan(0.5 * fieldOfViewY * 3.14159265359 / 180.0);
  // compute position in the camera's image plane in range [-1.0, 1.0]
  vec2 pos = 2.0 * (point - 0.5);
  return normalize(vec3(pos.x * aspectRatio, pos.y, -focalLength));
}

void main() { /**** RAY GENERATION SHADER ****/

  // compute the texture coordinate for the output image in range [0.0, 1.0]
  vec2 texCoord = (vec2(gl_LaunchIDEXT.xy) + 0.5) / vec2(gl_LaunchSizeEXT.xy);

  // camera's aspect ratio
  float aspect = float(gl_LaunchSizeEXT.x) / float(gl_LaunchSizeEXT.y);
  
  vec3 rayOrigin = vec3(0.0, 0.0, 0.0);
  vec3 rayDirection = getCameraRay(45.0, aspect, texCoord);
  ...
 

Perspective Camera - Ray Generation Shader

example_cameraray

Rendering a Triangle Mesh

trimesh raytracing_pipeline2
  • We now add a closest-hit and a miss shader to render our first triangle mesh
  • The camera rays from the previous example are submitted to the "acceleration structure traversal" by calling the built-in traceRayEXT(...) function in the ray generation shader
  • If a camera ray hits the triangle mesh, the closest-hit shader is called and the payload.color variable is set to red. Otherwise, if no hits occur, the miss shader is called and the payload.color variable is set to black.
  • After triggering the execution of the closest-hit or the miss shader, the traceRayEXT function returns to the calling ray generation shader and the modified payload.color variable is written to the output image
  • Example: Triangle Mesh

Accessing the Vertex Data of a Triangle Mesh

vertexdata raytracing_pipeline2
  • When the closest-hit shader is called, the built-in variables gl_InstanceID, gl_ObjectToWorldEXT, and gl_PrimitiveID are set accordingly and identify the BLAS instance, its transformation, and the primitive (in this case, the triangle) of the closest hit location
  • The emulator provides three functions that take the gl_InstanceID and gl_PrimitiveID variables as input parameters and return the local vertex positions, local normals, and texture coordinates for the three vertices of the triangle that was hit
  • Furthermore, the barycentric coordinates of the closest hit location are computed by the built-in triangle intersection and are passed to the closest-hit shader via the vec2 baryCoord variable
  • Example: Vertex Data

Refresher: Scalar Product

scalarproduct (figure: vectors $\mathbf{a}$ and $\mathbf{b}$, the angle $\alpha$ between them, and the projection $\mathbf{a}_{\mathbf{b}}$)
  • The scalar product (aka dot product) combines two vectors $\mathbf{a}$ and $\mathbf{b} \in \mathbb{R}^N$. The result is a scalar.
    Scalar product: $\langle \mathbf{a} \cdot \mathbf{b}\rangle = \mathbf{a}^\top \mathbf{b} = \sum\limits_{i=1}^N a_i b_i$
    Scalar product in $\mathbb{R}^3$: $\mathbf{a}^\top \mathbf{b} = (a_1, a_2, a_3) \begin{pmatrix}b_1\\b_2 \\ b_3 \end{pmatrix} = a_1 b_1 + a_2 b_2 + a_3 b_3$
  • Commutative: $\langle \mathbf{a} \cdot \mathbf{b}\rangle = \langle \mathbf{b} \cdot \mathbf{a}\rangle$
  • Cosine law: $\langle \mathbf{a} \cdot\mathbf{b}\rangle = |\mathbf{a}| |\mathbf{b}| \cos \alpha$
  • Orthogonal vectors: $\mathbf{a} \perp \mathbf{b} \rightarrow \langle \mathbf{a} \cdot \mathbf{b}\rangle = 0$
  • Orthogonal projection of $\mathbf{a}$ onto $\mathbf{b}$: $\mathbf{a}_{\mathbf{b}} = (\mathbf{a}^\top \frac{\mathbf{b}}{|\mathbf{b}| }) \frac{\mathbf{b}}{|\mathbf{b}| }$
  • Relation to matrix multiplication:
    $\begin{bmatrix}a_{11} & a_{12}\\a_{21} & a_{22}\end{bmatrix}\begin{bmatrix}b_{11} & b_{12}\\b_{21} & b_{22}\end{bmatrix} = \begin{bmatrix}\mathbf{a}_1^\top\\ \mathbf{a}_2^\top\end{bmatrix}\begin{bmatrix}\mathbf{b}_1 & \mathbf{b}_2\end{bmatrix} = \begin{bmatrix}\mathbf{a}_1^\top\mathbf{b}_1 & \mathbf{a}_1^\top\mathbf{b}_2\\ \mathbf{a}_2^\top\mathbf{b}_1 & \mathbf{a}_2^\top\mathbf{b}_2 \end{bmatrix}$
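The listed properties are easy to verify numerically; a pure-Python sketch with arbitrary example vectors:

```python
import math

def dot(a, b):
    # scalar product: sum of element-wise products
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

a = (1.0, 2.0, 3.0)
b = (4.0, -5.0, 6.0)

# commutativity
assert dot(a, b) == dot(b, a)

# cosine law: <a . b> = |a| |b| cos(alpha)
alpha = math.acos(dot(a, b) / (norm(a) * norm(b)))
assert abs(dot(a, b) - norm(a) * norm(b) * math.cos(alpha)) < 1e-9

# projection of a onto b: a_b = (a . b / |b|) * b / |b|
ab = tuple(dot(a, b) / dot(b, b) * c for c in b)
# the residual a - a_b is orthogonal to b
assert abs(dot(tuple(x - y for x, y in zip(a, ab)), b)) < 1e-9
```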

Refresher: Cross Product

scalarproduct (figure: vectors $\mathbf{a}$ and $\mathbf{b}$, the angle $\alpha$ between them, the vector $\mathbf{a} \times \mathbf{b}$, and the parallelogram area $|\mathbf{a} \times \mathbf{b}|$)
  • The cross product combines two vectors $\mathbf{a}$ and $\mathbf{b} \in \mathbb{R}^3$.
    The result is a new vector $\mathbf{c} = \mathbf{a} \times \mathbf{b} \in \mathbb{R}^3$.
  • Cross product:
    $\begin{pmatrix}a_1\\a_2 \\ a_3 \end{pmatrix} \times \begin{pmatrix}b_1\\b_2 \\ b_3 \end{pmatrix} = \begin{pmatrix}a_2 b_3 - a_3 b_2\\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{pmatrix}$
  • Matrix notation:
    $\mathbf{a} \times \mathbf{b} = \begin{bmatrix}0 & -a_3 & a_2 \\a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix} \mathbf{b} = [\mathbf{a}]_\times \,\mathbf{b}$
  • Antisymmetry: $\mathbf{a} \times \mathbf{b} = -\mathbf{b} \times \mathbf{a}$
  • Sine law: $|\mathbf{a} \times \mathbf{b}| = |\mathbf{a}| |\mathbf{b}| \sin \alpha$
  • Orthogonal vectors: $\mathbf{a} \perp (\mathbf{a} \times \mathbf{b}) \perp \mathbf{b}$
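These identities can also be checked numerically with arbitrary example vectors:

```python
def cross(a, b):
    # cross product in R^3, component-wise as in the formula above
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a = (1.0, 2.0, 3.0)
b = (4.0, -5.0, 6.0)
c = cross(a, b)   # (27.0, 6.0, -13.0)

# antisymmetry: a x b = -(b x a)
assert c == tuple(-x for x in cross(b, a))

# the result is orthogonal to both inputs
assert dot(a, c) == 0.0 and dot(b, c) == 0.0
```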

Moving the Camera

lookat (figure: eye point $\mathbf{c}_{\mathrm{\small eye}}$, reference point $\mathbf{p}_{\mathrm{\small ref}}$, up vector $\mathbf{v}_{\mathrm{\small up}}$, and the camera basis vectors $\tilde{\mathbf{c}}_x, \tilde{\mathbf{c}}_y, \tilde{\mathbf{c}}_z$)
  • By defining
    • eye point  $\mathbf{c}_{\mathrm{\small eye}}$
    • targeted reference point $\mathbf{p}_{\mathrm{\small ref}}$
    • up vector $\mathbf{v}_{\mathrm{\small up}}$ (which defines in which direction the $y$-coordinate of the camera is pointing)
    the basis vectors of the camera coordinate system are given by:
    $\begin{aligned} \mathbf{d} & = \mathbf{c}_{\mathrm{\small eye}} - \mathbf{p}_{\mathrm{\small ref}}\\ \tilde{\mathbf{c}}_z &= \frac{\mathbf{d}}{|\mathbf{d}|} \\ \mathbf{v}' &= \frac{\mathbf{v}_{\mathrm{\small up}}}{|\mathbf{v}_{\mathrm{\small up}}|} \\ \tilde{\mathbf{c}}_x &= \mathbf{v}'\times \tilde{\mathbf{c}}_z \\ \tilde{\mathbf{c}}_y &= \tilde{\mathbf{c}}_z \times \tilde{\mathbf{c}}_x \end{aligned}$

Moving the Camera

  • For the camera at the origin, the camera ray is calculated with:
    $\begin{aligned}\mathbf{p} &= \begin{pmatrix}p_x\\p_y\\p_z\end{pmatrix} = \begin{pmatrix}\frac{w}{h} \, x_n\\ y_n\\ -f\end{pmatrix}\\ \mathbf{r} &= \frac{\mathbf{p}}{|\mathbf{p}|}\quad \quad \mathbf{s}= (0.0, 0.0, 0.0)^\top\end{aligned}$
  • Consequently, the camera ray in the transformed coordinate system is given by:
    $\begin{aligned}\tilde{\mathbf{p}} &= \tilde{\mathbf{c}}_x \, p_x + \tilde{\mathbf{c}}_y \, p_y + \tilde{\mathbf{c}}_z \, p_z\\ \tilde{\mathbf{r}} &= \frac{\tilde{\mathbf{p}}}{|\tilde{\mathbf{p}}|} \quad \quad \tilde{\mathbf{s}}= \mathbf{c}_{\mathrm{\small eye}}\end{aligned}$
  • Example: Camera Orbit
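The look-at construction and the transformed camera ray can be sketched in Python as follows; the code mirrors the formulas above, and the eye, reference point, and up vector are arbitrary example values:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def camera_basis(eye, ref, up):
    # cz points from the reference point towards the eye
    cz = normalize(tuple(e - r for e, r in zip(eye, ref)))
    cx = cross(normalize(up), cz)   # as on the slide: cx = v' x cz
    cy = cross(cz, cx)
    return cx, cy, cz

def transformed_ray(p, cx, cy, cz, eye):
    # p~ = cx * px + cy * py + cz * pz,  s~ = eye
    pt = tuple(cx[i] * p[0] + cy[i] * p[1] + cz[i] * p[2] for i in range(3))
    return eye, normalize(pt)

eye = (0.0, 0.0, 5.0)
ref = (0.0, 0.0, 0.0)
up  = (0.0, 1.0, 0.0)
cx, cy, cz = camera_basis(eye, ref, up)
# for this eye/ref/up, the camera basis coincides with the world axes,
# so a point on the image plane at (0, 0, -1) yields a ray along -z
s, r = transformed_ray((0.0, 0.0, -1.0), cx, cy, cz, eye)
```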

Shooting a Shadow Ray

raytracing_pipeline3
  • The closest-hit and the miss shader can also call the traceRayEXT function to submit a ray into the acceleration structure traversal
  • In this example, the closest-hit shader calls traceRayEXT to send a shadow ray in the direction of the light source.
shadow_ray (figure: a primary ray from the camera and a shadow ray from the hit point towards the light source)

Shooting a Shadow Ray

shadow_ray
  • The shadow ray's task is to inform the emitting closest-hit shader whether or not there is any object between the hit point and the light source
  • If a hit occurs on the ray path towards the light, we are not interested in calling the closest-hit shader for that hit. Therefore, we can set the ray flags to gl_RayFlagsSkipClosestHitShaderEXT
  • Furthermore, we can save computation in the ray traversal because we do not need to find the closest hit. The ray traversal can already terminate on the first hit, which is why we additionally set the gl_RayFlagsTerminateOnFirstHitEXT flag.
  • If no hit occurs for the shadow ray, the miss shader is called and it sets the payload.shadowRayMiss variable from false to true
  • When the traceRayEXT function returns to the emitting closest-hit shader, this variable can be checked to determine if the surface point is in shadow or not
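Since the ray-flag constants are powers of two, they act as bit masks and several flags can be combined with a bitwise OR. A small Python illustration (the values mirror the built-in GLSL constants listed earlier):

```python
# mirrors of the GLSL built-in constants for ray flags
gl_RayFlagsTerminateOnFirstHitEXT = 4
gl_RayFlagsSkipClosestHitShaderEXT = 8

# combined flags for a shadow ray, as used in the closest-hit shader
shadow_ray_flags = (gl_RayFlagsTerminateOnFirstHitEXT |
                    gl_RayFlagsSkipClosestHitShaderEXT)
# both bits are set in the combined mask
assert shadow_ray_flags & gl_RayFlagsTerminateOnFirstHitEXT
assert shadow_ray_flags & gl_RayFlagsSkipClosestHitShaderEXT
```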

Example: Shooting a Shadow Ray

example_shadowray

Example: Shooting a Shadow Ray

struct RayPayloadType {
  vec3 color;
  bool shadowRayMiss;
}; // type of the "payload" variable

...

void  main() { /**** CLOSEST-HIT SHADER ****/
  
  // get mesh vertex data in object space
  vec3 p0, p1, p2;
  gsnGetPositions(gl_InstanceID, gl_PrimitiveID, p0, p1, p2);
  vec3 n0, n1, n2;
  gsnGetNormals(gl_InstanceID, gl_PrimitiveID, n0, n1, n2);
  vec2 t0, t1, t2;
  gsnGetTexCoords(gl_InstanceID, gl_PrimitiveID, t0, t1, t2);

  // interpolate with barycentric coordinates
  vec3 barys = vec3(1.0f - baryCoord.x - baryCoord.y, baryCoord.x, baryCoord.y);
  vec3 localNormal = normalize(n0 * barys.x + n1 * barys.y + n2 * barys.z);
  vec3 localPosition = p0 * barys.x + p1 * barys.y + p2 * barys.z;
  vec2 texCoords = t0 * barys.x + t1 * barys.y + t2 * barys.z;

  // transform to world space
  mat3 normalMat;
  gsnGetNormal3x3Matrix(gl_InstanceID, normalMat);
  vec3 normal = normalize(normalMat * localNormal);
  vec3 position = gl_ObjectToWorldEXT * vec4(localPosition, 1.0);
  
  // dynamic light location
  float t = float(frameID % 45)/float(45);
  vec3 lightPos = vec3(5.0 * sin(2.0*PI*t), 5.0 * cos(2.0*PI*t),  5.0);
  vec3 lightDir = normalize(lightPos - position);
  
  // prepare shadow ray
  uint rayFlags = gl_RayFlagsTerminateOnFirstHitEXT |
                                gl_RayFlagsSkipClosestHitShaderEXT;
  float rayMin     = 0.001;
  float rayMax     = length(lightPos - position);  
  float shadowBias = 0.001;
  uint cullMask = 0xFFu;
  float frontFacing = dot(-gl_WorldRayDirectionEXT, normal);
  vec3 shadowRayOrigin = position + sign(frontFacing) * shadowBias * normal;
  vec3 shadowRayDirection = lightDir;
  payload.shadowRayMiss = false;

  // shoot shadow ray
  traceRayEXT(topLevelAS, rayFlags, cullMask, 0u, 0u, 0u, 
         shadowRayOrigin, rayMin, shadowRayDirection, rayMax, 0);
  
  // diffuse shading
  vec3 radiance = ambientColor; // ambient term
  if(payload.shadowRayMiss) { // if not in shadow
    float irradiance = max(dot(lightDir, normal), 0.0);
    if(irradiance > 0.0) { // if receives light
      radiance += baseColor * irradiance; // diffuse shading
    }
  }  
  
  payload.color = vec3(radiance);
}

void main() { /**** MISS SHADER ****/
  // set color to black
  payload.color = vec3(0.0, 0.0, 0.0);
  // shadow ray has not hit an object
  payload.shadowRayMiss = true;
}

Reflections

reflections
  • For a ray hit on a reflective surface, the reflected ray can be calculated and traced into the scene
  • In a first simple model, we assume that the radiance at a surface point is the sum of
    • the direct light from the light source
    • plus the radiance contribution from the reflected ray
  • The ray tracer only stops at non-reflective surfaces. Therefore, the number of reflections must be limited; otherwise, tracing between reflective surfaces could loop endlessly

Reflections

  • According to the law of reflection, the reflection direction has the same angle to the surface normal as the incidence direction
    Angle of incidence = Angle of reflection
  • The reflection direction $\mathbf{r}_{\mathrm{\small out}}$ is therefore calculated from the surface normal $\mathbf{n}$ and the incidence direction $\mathbf{r}_{\mathrm{\small in}}$ by:
    $\mathbf{r}_{\mathrm{\small out}} = \mathbf{r}_{\mathrm{\small in}} - 2 \, \langle \mathbf{r}_{\mathrm{\small in}} \cdot \mathbf{n}\rangle \,\mathbf{n}$
    reflect_angles
    $\mathbf{r}_{\mathrm{\small in}}$
    $\mathbf{r}_{\mathrm{\small out}}$
    $\mathbf{n}$
  • GLSL provides the function reflect to calculate the reflection direction:
    vec3 r_out = reflect(r_in, normal);
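A small Python check of the reflection formula (the incoming direction is an arbitrary example at 45°; note that $\mathbf{n}$ must be normalized, just as GLSL's reflect expects):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(r_in, n):
    # r_out = r_in - 2 <r_in . n> n   (n must be unit length)
    d = dot(r_in, n)
    return tuple(r - 2.0 * d * c for r, c in zip(r_in, n))

n = (0.0, 1.0, 0.0)                            # surface normal
r_in = (math.sqrt(0.5), -math.sqrt(0.5), 0.0)  # incoming at 45 degrees
r_out = reflect(r_in, n)

# angle of incidence equals angle of reflection
assert abs(dot(r_in, n) + dot(r_out, n)) < 1e-12
```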

Reflections

  • Reflections can be implemented very easily with recursive function calls:
    trace(level, ray, &color) { // THIS IS PSEUDOCODE!!!
      if (intersect(ray, &hit)) { 
        shadow = testShadow(hit);
        directColor = getDirectLight(hit, shadow);
        if (reflectionFactor > 0.0 && level < maxLevel) {
          reflectedRay = reflect(ray, hit.normal);
          trace(level + 1, reflectedRay, &reflectionColor); // recursion
        }
        color = color + directColor + reflectionFactor * reflectionColor;
      } else {
        color = backgroundColor;
      }
    }
  • However, when working with ray tracing shaders, the number of recursive function calls should be kept as low as possible.

Reflection without Recursion

  • Fortunately, the same result can also be achieved without recursion
    trace(ray, &color) { // THIS IS PSEUDOCODE!!!
      nextRay = ray;  
      contribution = 1.0; level = 0;
      while (nextRay && level < maxLevel) {
        if (intersect(nextRay, &hit)) {
          shadow = testShadow(hit);
          directColor = getDirectLight(hit, shadow);
          if (reflectionFactor > 0.0) {
            reflectedRay = reflect(nextRay, hit.normal);
            nextRay = reflectedRay;
          } else {
            nextRay = false;
          }
        } else {
          directColor = backgroundColor;
          nextRay = false;
        }
        color = color + contribution * directColor;
        contribution = contribution * reflectionFactor;
        level = level + 1;
      }
    }
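Under the simplifying assumption that every ray hits a surface with the same direct color and reflection factor, the recursive and the iterative formulation give the same result; a toy Python comparison (the constants are arbitrary example values):

```python
DIRECT = 0.6       # direct light at every hit (toy value)
FACTOR = 0.25      # reflection factor of every surface (toy value)
MAX_LEVEL = 5

def trace_recursive(level):
    # recursive formulation: color = direct + factor * reflected color
    if level >= MAX_LEVEL:
        return 0.0
    return DIRECT + FACTOR * trace_recursive(level + 1)

def trace_iterative():
    # iterative formulation: accumulate contribution-weighted direct light
    color, contribution = 0.0, 1.0
    for _ in range(MAX_LEVEL):
        color += contribution * DIRECT
        contribution *= FACTOR
    return color

# both formulations sum the same geometric series
assert abs(trace_recursive(0) - trace_iterative()) < 1e-12
```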

Example: Reflections without Recursion

example_reflections

Example: Reflections without Recursion

struct RayPayloadType {
  vec3 directLight;
  vec3 nextRayOrigin;
  vec3 nextRayDirection;
  float nextReflectionFactor;
  bool shadowRayMiss;
}; // type of the "payload" variable

...

void main() { /**** RAY GENERATION SHADER ****/

  // compute the texture coordinate for the output image in range [0.0, 1.0]
  vec2 texCoord = (vec2(gl_LaunchIDEXT.xy) + 0.5) / vec2(gl_LaunchSizeEXT.xy);

  // camera parameter
  float aspect = float(gl_LaunchSizeEXT.x) / float(gl_LaunchSizeEXT.y);
  vec3 rayOrigin = camPos;
  vec3 rayDirection = getCameraRayLookAt(20.0, aspect, camPos, 
                                                 camLookAt, camUp, texCoord);
  
  uint rayFlags = gl_RayFlagsNoneEXT; // no ray flags
  float rayMin = 0.001; // minimum ray distance for a hit
  float rayMax = 10000.0; // maximum ray distance for a hit  
  uint cullMask = 0xFFu; // no culling
  
  // init ray and payload
  payload.nextRayOrigin = rayOrigin;
  payload.nextRayDirection = rayDirection;
  payload.nextReflectionFactor = 1.0;
  float contribution = 1.0;
  vec3 color = vec3(0.0, 0.0, 0.0);
  int level = 0;
  const int maxLevel = 5;
  
  // shoot rays
  while(length(payload.nextRayDirection) > 0.1 && 
              level < maxLevel && contribution > 0.001) {
    // Submitting the camera ray to the acceleration structure traversal.
    // The last parameter is the index of the "payload" variable (always 0)
    traceRayEXT(topLevelAS, rayFlags, cullMask, 0u, 0u, 0u, 
            payload.nextRayOrigin, rayMin, payload.nextRayDirection, rayMax, 0);
    color += contribution * payload.directLight;
    contribution *= payload.nextReflectionFactor;
    level++;
  }

  gsnSetPixel(vec4(color, 1.0));
}

void  main() { /**** CLOSEST-HIT SHADER ****/
  
  // get mesh vertex data in object space
  vec3 p0, p1, p2;
  gsnGetPositions(gl_InstanceID, gl_PrimitiveID, p0, p1, p2);
  vec3 n0, n1, n2;
  gsnGetNormals(gl_InstanceID, gl_PrimitiveID, n0, n1, n2);
  vec2 t0, t1, t2;
  gsnGetTexCoords(gl_InstanceID, gl_PrimitiveID, t0, t1, t2);

  // interpolate with barycentric coordinates
  vec3 barys = vec3(1.0f - baryCoord.x - baryCoord.y, baryCoord.x, baryCoord.y);
  vec3 localNormal = normalize(n0 * barys.x + n1 * barys.y + n2 * barys.z);
  vec3 localPosition = p0 * barys.x + p1 * barys.y + p2 * barys.z;
  vec2 texCoords = t0 * barys.x + t1 * barys.y + t2 * barys.z;

  // transform to world space
  mat3 normalMat;
  gsnGetNormal3x3Matrix(gl_InstanceID, normalMat);
  vec3 normal = normalize(normalMat * localNormal);
  vec3 position = gl_ObjectToWorldEXT * vec4(localPosition, 1.0);

  vec3 lightDir = normalize(lightPos - position);
  
  // prepare shadow ray
  uint rayFlags = gl_RayFlagsTerminateOnFirstHitEXT | 
                             gl_RayFlagsSkipClosestHitShaderEXT;
  float rayMin     = 0.001;
  float rayMax     = length(lightPos - position);  
  float shadowBias = 0.001;
  uint cullMask = 0xFFu;
  float frontFacing = dot(-gl_WorldRayDirectionEXT, normal);
  vec3 shadowRayOrigin = position + sign(frontFacing) * shadowBias * normal;
  vec3 shadowRayDirection = lightDir;
  payload.shadowRayMiss = false;

  // shoot shadow ray
  traceRayEXT(topLevelAS, rayFlags, cullMask, 0u, 0u, 0u, 
         shadowRayOrigin, rayMin, shadowRayDirection, rayMax, 0);
  
  // diffuse shading (direct light)
  vec3 radiance = ambientColor; // ambient term
  if(payload.shadowRayMiss) { // if not in shadow
    float irradiance = max(dot(lightDir, normal), 0.0);
    if(irradiance > 0.0) { // if receives light
      radiance += baseColor * irradiance; // diffuse shading
    }
  }  
  payload.directLight = radiance;
  
  // compute reflected ray (prepare next traceRay)
  float reflectionFactor = 0.25;
  if(reflectionFactor > 0.0) {
    payload.nextRayOrigin = position;
    payload.nextRayDirection = reflect(gl_WorldRayDirectionEXT, normal);
    payload.nextReflectionFactor = reflectionFactor;
  } else {
    // no more reflections
    payload.nextRayOrigin = vec3(0.0, 0.0, 0.0);
    payload.nextRayDirection = vec3(0.0, 0.0, 0.0); 
  }
}

void main() { /**** MISS SHADER ****/
  // set color to black
  payload.directLight = vec3(0.0, 0.0, 0.0);
  // shadow ray has not hit an object
  payload.shadowRayMiss = true;
  // no more reflections
  payload.nextRayOrigin = vec3(0.0, 0.0, 0.0);
  payload.nextRayDirection = vec3(0.0, 0.0, 0.0);
}

Distributed Ray Tracing: Anti-Aliasing

example_antialiasing
  • To prevent aliasing due to undersampling, we can send multiple rays per pixel
  • The random position of the ray inside the pixel must be uniformly distributed
  • By averaging the contributions of the rays, the correct color value for the pixel can be determined
  • If the previous average value is saved in the previous frame's image, the new average can be calculated as follows:
    vec4 previousAverage = gsnGetPreviousPixel();
    vec3 newAverage = (previousAverage.rgb * float(frameID) + payload.color) / float(frameID + 1);
    gsnSetPixel(vec4(newAverage, 1.0));
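The incremental update yields the same value as averaging all samples at once; a quick Python check with arbitrary sample values:

```python
samples = [0.8, 0.2, 0.5, 0.9, 0.1]

average = 0.0  # running average, as stored in the previous frame's image
for frame_id, sample in enumerate(samples):
    # newAverage = (previousAverage * frameID + sample) / (frameID + 1)
    average = (average * frame_id + sample) / (frame_id + 1)

# identical (up to rounding) to the batch average over all samples
assert abs(average - sum(samples) / len(samples)) < 1e-12
```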

Halton Sequence

halten2d (figure: sample positions of the 2D Halton sequence; $x = h_2$, $y = h_3$)
  • To prevent repeated sampling at the same position, pseudo-random sequences (such as the Halton sequence) are often used in practice
  • Halton sequence (in several dimensions)
    ${\small \left(h_2(n), h_3(n), h_5(n), h_7(n), \dots, h_{p_i}(n)\right)^\top}$
    where $p_i$ is the $i$-th prime number and $h_r(n)$ is computed by mirroring the digits of $n$ (written in base $r$) at the radix point
  • The generated samples have a uniform distribution
  • A sequence of arbitrary length is possible
  • Example:
    $h_2((26)_{10})= h_2((11010)_2) = (0.01011)_2 = 11/32$
    $h_3((19)_{10})= h_3((201)_3)= (0.102)_3=11/27$

Halton Sequence

Index $n$   Value (base 2)   Mirrored                   $h_2(n)$
1           1                0.1 = 1/2                  0.5
2           10               0.01 = 1/4                 0.25
3           11               0.11 = 3/4                 0.75
4           100              0.001 = 1/8                0.125
5           101              0.101 = 1/2 + 1/8          0.625
6           110              0.011 = 1/4 + 1/8          0.375
7           111              0.111 = 1/2 + 1/4 + 1/8    0.875
halten2

Halton Sequence

Index $n$   Value (base 3)   Mirrored                   $h_3(n)$
1           1                0.1 = 1/3                  0.333
2           2                0.2 = 2/3                  0.666
3           10               0.01 = 1/9                 0.111
4           11               0.11 = 1/3 + 1/9           0.444
5           12               0.21 = 2/3 + 1/9           0.777
6           20               0.02 = 2/9                 0.222
7           21               0.12 = 1/3 + 2/9           0.555
8           22               0.22 = 2/3 + 2/9           0.888
halten2

Hammersley Sequence

  • If the number of samples $N$ is known in advance, the Hammersley sequence can be used instead; its first dimension can be computed faster
  • Hammersley sequence (in several dimensions)
    $\left(\frac{n}{N}, h_2(n), h_3(n), h_5(n), h_7(n), \dots, h_{p_i}(n)\right)^\top$
hammersley2d (figure: sample positions of the 2D Hammersley sequence; $x = \frac{n}{N}$, $y = h_2$)

Distributed Ray Tracing: Anti-Aliasing

example_antialiasing

Distributed Ray Tracing: Soft Shadows

softshadow (figure: an area light casting a full shadow (umbra) and partial shadows (penumbra) behind an occluder, observed by a camera)
  • Distributed Ray Tracing (Cook et al., Siggraph 1984) is not only suitable for anti-aliasing but it can also be used to create soft shadows
  • The position on an area light source is varied randomly (with a uniform distribution)
  • Further applications: glossy surfaces, motion blur, depth of field, etc.

Distributed Ray Tracing: Soft Shadows

example_softshadows

Path Tracing

path_tracing
  • If distributed ray tracing were used for indirect light (e.g. for indirect diffuse reflection), this would quickly result in problems because the number of rays grows exponentially
    • $N$ rays for 1st indirection
    • $N^2$ rays for 2nd indirection
    • $N^3$ rays for 3rd indirection and so on
  • The indirections of a higher degree have a smaller contribution to the image but use significantly more rays than the lower indirections
  • Solution: Path Tracing (Kajiya, Siggraph 1986)
    • At each hit, select only 1 ray from all possible rays and continue tracing. This creates 1 path per primary ray.
    • Evaluate $N$ different paths per camera pixel and calculate the mean value per pixel
    • Advantage: The same computational effort for all indirections
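The exponential growth described above can be illustrated with a quick back-of-the-envelope comparison (an arbitrary example value of $N = 16$ rays per hit and three indirections):

```python
N = 16       # rays per hit in distributed ray tracing (example value)
DEPTH = 3    # number of indirections

# distributed ray tracing: rays multiply at every indirection
distributed = sum(N ** k for k in range(1, DEPTH + 1))   # 16 + 256 + 4096

# path tracing: N paths, each extended by one ray per indirection
path_tracing = N * DEPTH

# the cost ratio grows with every additional indirection
assert distributed > path_tracing
```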

Path Tracing

path_tracing
  • In a simplified model, we first assume that the radiance at a point is the sum of the direct component and the
    • indirect diffuse reflection (in all directions)
    • ideal reflection
    • ideal refraction
  • That is, at each hit only one of these rays is selected and traced: indirect diffuse reflection, ideal reflection, or ideal refraction
  • Better physical models will be presented later (rendering equation, BRDF, Fresnel, ...)

Path Tracing: Cornell Box

example_pathtracing.png
Example: Path Tracing
example_path_tracing_cycles_256
Reference: Blender Cycles
  • The ray tracing shader implementation (left) and the Blender Cycles reference (right) both use 256 samples per pixel
  • Both renderings are nearly identical; in the Cycles version, the sphere has an additional reflective component.

Are there any questions?

questions

Please notify me by e-mail if you have questions, suggestions for improvement, or found typos: Contact

More lecture slides

Slides in German (Folien auf Deutsch)