Shadows are essential to making a scene look realistic; a scene without them feels flat and unnatural. Have a look at the following image, taken during a summer in Hawaii with the sun directly overhead.
Figure 1: Real World Scene without shadows

The poles look as if they were rendered in an old game without shadow support, and the brain immediately perceives the scene as odd. A well-established technique for adding shadows to a scene is Shadow Mapping. It has been around for a long time and has been extended into variants such as Cascaded Shadow Mapping and Reflective Shadow Mapping. We will use it as the basis for our Reflective Shadow Maps.

## Shadow Mapping

A shadow is the absence of light: a ray of light cannot travel directly from a light source to the fragment we are shading. Shadow Mapping tests this visibility by rendering a depth buffer of the entire scene from the perspective of the light and using it to determine whether a given fragment can be seen by the light.

The shadow map is typically a 32-bit floating point depth texture, filled in an additional pre-pass. Different types of lights require different techniques to correctly simulate their shadow-casting behavior; the discussed implementation only supports shadow mapping for directional lights. Directional lights do not have a position and cast parallel light rays, simulating a light source that is infinitely far away. We can correctly model the projective properties of such a light with an orthographic projection matrix, which maps all geometry within a bounding box to clip space. Because the projection is parallel, objects are not foreshortened with distance, which is exactly what we want for an infinitely distant light source.
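As a sketch of this mapping, here is the orthographic construction in Python rather than shader code; `orthographic` and `transform` are illustrative helpers, assuming a row-major matrix and a D3D-style [0, 1] depth range:

```python
def orthographic(left, right, bottom, top, near, far):
    """Row-major 4x4 matrix mapping the box [left,right] x [bottom,top]
    x [near,far] to clip space, with a D3D-style [0, 1] depth range."""
    return [
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, 1.0 / (far - near), -near / (far - near)],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, p):
    """Apply a row-major 4x4 matrix to the point (x, y, z, 1)."""
    x, y, z = p
    return [row[0] * x + row[1] * y + row[2] * z + row[3] for row in m]
```

Note that `w` stays 1 for every point: there is no perspective divide, which is why distant geometry does not shrink under this projection.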

We must create some simple shaders and a render target of the desired size and format; a single 32-bit floating point depth texture is a typical choice. Higher resolution render targets make the shadows less jagged, at the cost of longer shadow map rendering times and a larger memory footprint. Note that extreme resolutions can introduce aliasing of their own, since the final image then undersamples the shadow map.

Rendering a shadow map only requires a vertex shader, which transforms our vertices from local space into orthographic projection space. A fragment shader is not required; the output merger will write the depth values produced by the rasterizer into the bound depth buffer. Because this vertex shader is so simple, rendering even complex scenes is relatively cheap. One could consider adding occlusion culling to this pass to reduce the number of draw calls.
Figure 2: Depth Buffer

Finally, we can use the resulting depth map to determine whether a fragment is lit. When shading a pixel in our final lighting pass, we transform its position into the light's normalized device coordinates and use the resulting coordinates to sample the depth map. If the sampled value is smaller than the depth of our reprojected pixel, the pixel is in shadow and should not be shaded by this light.
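The whole test can be sketched in Python; `in_shadow`, its `bias` parameter, and the CPU-side shadow map representation are all illustrative, not part of the actual HLSL implementation:

```python
def in_shadow(world_pos, light_view_proj, shadow_map, bias=0.005):
    """True if world_pos is occluded from the light. light_view_proj is a
    row-major 4x4 matrix; shadow_map is a 2D list of depths in [0, 1]."""
    x, y, z = world_pos
    cx, cy, cz, cw = (r[0] * x + r[1] * y + r[2] * z + r[3]
                      for r in light_view_proj)
    cx, cy, cz = cx / cw, cy / cw, cz / cw  # divide is a no-op for ortho
    u = cx * 0.5 + 0.5           # NDC [-1, 1] -> texture space [0, 1]
    v = -cy * 0.5 + 0.5          # v axis points down in texture space
    h, w = len(shadow_map), len(shadow_map[0])
    ix = min(max(int(u * w), 0), w - 1)
    iy = min(max(int(v * h), 0), h - 1)
    # Occluded if something closer to the light was recorded in the map
    return shadow_map[iy][ix] < cz - bias
```

The small `bias` already hints at the acne problem discussed next: without it, a surface tends to shadow itself.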

After running our application with this shader we can see that the general idea is there.

However, there are severe artifacts in the lit areas. This phenomenon is called Shadow Acne and is caused by the discrete sampling of our shadow map: each shadow map pixel is mapped to a region of the scene that is, in general, larger than what the pixel can actually represent, so its single stored depth value is used for every point in that region.
Figure 4: Discrete Sampling

Fortunately, this is easily fixed in DirectX 12: we can make the hardware add a depth bias when rendering our shadow map. We can even set a bigger bias for steep slopes and have the hardware clamp the bias to prevent artifacts.
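The rasterizer's bias rule can be sketched as follows. `apply_depth_bias` and its parameters are illustrative names mirroring the constant, slope-scaled, and clamp settings, and the format-dependent scaling D3D12 applies to the constant term is omitted:

```python
def apply_depth_bias(depth, max_depth_slope, constant_bias,
                     slope_scale, bias_clamp):
    """Sketch of a D3D-style rasterizer depth bias:
    bias = constant + slope_scale * max_depth_slope, optionally clamped.
    Steeper polygons (larger max_depth_slope) receive a larger bias."""
    bias = constant_bias + slope_scale * max_depth_slope
    if bias_clamp > 0.0:
        bias = min(bias, bias_clamp)  # cap the bias on extreme slopes
    return depth + bias
```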

## Peter Panning

Adding a bias to our sampled depth is not without consequence. It introduces a new artifact, commonly referred to as Peter Panning: shadows detach from the objects casting them, making them appear to float. The bigger the bias, the more severe the artifact, so care should be taken when selecting a proper bias.
Figure 5: Adding an increasingly bigger bias. Left: small bias; right: big bias.

Again DirectX 12 saves us: the hardware can add the minimum representable value above zero to the depth values. These biases are tiny and will, in general, cause little to no artifacts.

## Jagged Edges

Due to the discrete sampling nature of our shadow map, the edges of the shadows are jagged: multiple pixels in our world map to the same pixel in the shadow map. Increasing the resolution of the shadow map reduces this effect, at the cost of higher rendering times, and one could also compute a tighter bounding box around the geometry. A more advanced variant, Cascaded Shadow Mapping, attacks the problem with a hierarchy of bounding boxes: a small one for objects close to the camera and increasingly bigger ones for geometry that is further away.
Figure 6: Jagged Edges
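Selecting a cascade at shading time can be as simple as comparing the view-space depth against the cascade split distances; `select_cascade` below is a minimal Python sketch with hypothetical names, not part of the discussed implementation:

```python
def select_cascade(view_depth, splits):
    """Pick the index of the cascade whose depth range contains
    view_depth. `splits` holds each cascade's far plane, increasing."""
    for i, far in enumerate(splits):
        if view_depth <= far:
            return i
    return len(splits) - 1  # beyond the last split: use the last cascade
```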

## Percentage Closer Filtering

Percentage Closer Filtering (PCF) can be used to smooth our shadows. The idea is to also sample the pixels around the pixel we are shading, which lets us compute a shadow percentage rather than a binary result. Luckily, the hardware can do this for us: in DirectX 12 we create a sampler set to comparison filtering so we can use the SampleCmp function in our shader. SampleCmp makes the hardware sample the depth buffer multiple times (depending on the filter mode) and blend the individual comparison results into a percentage.
Figure 7: PCF
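What SampleCmp does with a linear comparison filter can be emulated on the CPU. The following Python sketch (with the illustrative name `sample_cmp_pcf`) compares each of the four neighbouring texels first, then bilinearly blends the binary results:

```python
import math

def sample_cmp_pcf(shadow_map, u, v, ref_depth):
    """2x2 PCF: compare ref_depth against each texel of the bilinear
    footprint, then blend the 0/1 outcomes into a lit percentage."""
    h, w = len(shadow_map), len(shadow_map[0])
    x, y = u * w - 0.5, v * h - 0.5     # texel-space footprint position
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0             # bilinear weights

    def lit(ix, iy):
        ix = min(max(ix, 0), w - 1)     # clamp addressing at the borders
        iy = min(max(iy, 0), h - 1)
        return 1.0 if ref_depth <= shadow_map[iy][ix] else 0.0

    top = lit(x0, y0) * (1 - fx) + lit(x0 + 1, y0) * fx
    bot = lit(x0, y0 + 1) * (1 - fx) + lit(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy
```

The crucial detail is the order of operations: compare, then filter. Filtering the depths first and comparing once would give a meaningless averaged depth instead of a shadow percentage.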

## Poisson Sampling

We can improve our shadows even further by using Poisson sampling. Instead of taking a single comparison sample, we take n samples, and Poisson sampling is a very good candidate for choosing their positions: it generates a set of points that are tightly packed yet lie at least a minimum distance apart, giving a nice even distribution of samples and a more natural-looking result.
Figure 8: PCF + Poisson
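A simple way to generate such a pattern offline is rejection sampling (dart throwing); `poisson_disk` below is a minimal Python sketch under that assumption, not the generator used in the implementation:

```python
import math
import random

def poisson_disk(n, min_dist, seed=42):
    """Generate up to n points in the unit disk that lie at least
    min_dist apart, by rejection (dart throwing)."""
    rng = random.Random(seed)
    points = []
    attempts = 0
    while len(points) < n and attempts < 10000:
        attempts += 1
        # Uniform sample in the unit disk (sqrt corrects the density)
        r = math.sqrt(rng.random())
        a = 2.0 * math.pi * rng.random()
        p = (r * math.cos(a), r * math.sin(a))
        # Keep the dart only if it respects the minimum distance
        if all(math.dist(p, q) >= min_dist for q in points):
            points.append(p)
    return points
```

Since the pattern is fixed, it can be generated once and baked into the shader as a constant array of offsets.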

## Reflective Shadow Mapping

Reflective Shadow Mapping (RSM) is a technique that attempts to generate plausible one-bounce indirect illumination. The idea is that all geometry visible in a light's shadow map reflects light back into the scene; a shadow map therefore contains all the data required for creating an indirect lighting effect.
Figure 9: Lucy with RSM

## Definitions and Assumptions

We assume all surfaces are diffuse reflectors. An RSM stores, per pixel $$p$$, the depth value $$d_{p}$$, the world space position $$x_{p}$$, the normal $$n_{p}$$, and the reflected radiant flux $$\Phi_{p}$$. Using these values we can interpret every pixel in the RSM as a Virtual Point Light (VPL) that illuminates the scene indirectly. One could store the world space positions in an additional buffer; however, we can avoid costly memory accesses by reconstructing them from the depth values. The flux determines the brightness and the color, while the normal defines the hemisphere over which the light is emitted. If we assume that the VPL is infinitely small, the radiant intensity emitted into direction $$\omega$$ is: $$I_{p}(\omega) = \Phi_{p} \max\{0, n_{p} \cdot \omega\}$$ The irradiance at a surface point $$x$$ with normal $$n$$ due to pixel light $$p$$ is then: $$E_{p}(x, n) = \Phi_{p} \, \frac{\max\{0,\, n_{p} \cdot (x - x_{p})\} \, \max\{0,\, n \cdot (x_{p} - x)\}}{\|x - x_{p}\|^{4}}$$
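The irradiance formula translates almost directly into code. This Python sketch (with the hypothetical name `vpl_irradiance`, and a scalar flux for simplicity) evaluates the contribution of a single VPL:

```python
def vpl_irradiance(x, n, x_p, n_p, flux):
    """Irradiance at point x (normal n) from the VPL at x_p (normal n_p,
    reflected flux `flux`). The unnormalised dot products, combined with
    the 1 / ||x - x_p||^4 divisor, yield the emitter cosine, the
    receiver cosine, and the 1 / r^2 distance falloff."""
    d = tuple(a - b for a, b in zip(x, x_p))              # x - x_p
    dist2 = sum(c * c for c in d)
    cos_p = max(0.0, sum(a * b for a, b in zip(n_p, d)))  # emitter side
    cos_x = max(0.0, sum(a * -b for a, b in zip(n, d)))   # receiver side
    return flux * cos_p * cos_x / (dist2 * dist2)
```

The two max terms clamp back-facing configurations to zero, so a VPL never lights points behind it, and a receiver never gathers light from behind its own surface.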

## Generating the Reflective Shadow Map

An RSM is generated in a similar manner to a regular shadow map. Depth values $$d_{p}$$ go into the bound depth buffer, normals $$n_{p}$$ into a normal buffer, and $$\Phi_{p}$$ into a 128-bit floating point buffer. Flux is simply the light color multiplied by the material color; for spotlights, we also have to take the cone angle into account. No distance attenuation or receiver cosine should be computed, so for directional lights the resulting flux buffer looks like an unshaded image. We expand our shadow map pass with a pixel shader that computes the normal and flux values to store in the RSM textures. If a normal map is present, it is sampled and the resulting value expanded into a usable world space normal. Finally, all the values needed for the RSM are written into their respective render targets.
Figure 10: Normals (left), Flux (middle), Depth (right)
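The flux computation itself is tiny. A Python sketch, with the hypothetical `spot_falloff` parameter standing in for the spotlight angle term (1.0 for directional lights):

```python
def rsm_flux(light_color, albedo, spot_falloff=1.0):
    """Reflected flux stored per RSM pixel: light colour times material
    albedo. Deliberately no distance attenuation and no receiver
    cosine; for spotlights an angular falloff factor is folded in."""
    return tuple(l * a * spot_falloff
                 for l, a in zip(light_color, albedo))
```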

## Evaluating the Reflective Shadow Map

The indirect irradiance at surface point $$x$$ can now be approximated by summing the contributions of all pixel lights in the reflective shadow map: $$E(x,n) = \sum_{\text{pixels } p} E_{p}(x, n)$$ For a 512x512 RSM, this would mean sampling 3 textures 262144 times per shaded point. This is simply not feasible, even on modern hardware, so we must apply importance sampling to drastically reduce the number of texture accesses. Consider the following image. Point $$x$$ is invisible from the perspective of the camera but is lit by virtual point lights in the RSM. We see that VPLs $$x_{1}$$ and $$x_{2}$$ both cast light on $$x$$, but due to distance attenuation $$x_{1}$$ contributes more than $$x_{2}$$; points closer to $$x$$ should therefore be prioritized. VPLs that are close in 3D world space are likely to be close in 2D RSM space as well, so we suggest sampling on a disk around the pixel being shaded. Note that the sampled values must be weighted to compensate for the non-uniform sample density across the disk.
Figure 11: Lights near the point being shaded should be prioritized
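The disk sampling scheme can be sketched as follows; `rsm_disk_samples` is an illustrative name, and each returned sample carries the $$\xi_{1}^{2}$$ weight (as in the original RSM paper) that compensates for the higher sample density near the disk centre:

```python
import math
import random

def rsm_disk_samples(u, v, r_max, count, seed=0):
    """Importance samples around RSM coordinate (u, v). Sample density
    falls off with distance from the centre, so each sample is
    weighted by xi1^2 to compensate."""
    rng = random.Random(seed)
    samples = []
    for _ in range(count):
        xi1, xi2 = rng.random(), rng.random()
        su = u + r_max * xi1 * math.sin(2.0 * math.pi * xi2)
        sv = v + r_max * xi1 * math.cos(2.0 * math.pi * xi2)
        samples.append((su, sv, xi1 * xi1))  # (RSM coords, weight)
    return samples
```

In practice the random pairs would come from the precomputed buffer described below rather than a per-frame generator.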

Furthermore, we do not take occlusion into account: $$x$$ receives light from $$x_{2}$$ even if geometry blocks the path between them. This can lead to very wrong results; however, we feel the approximation is sufficient for generating subtle indirect lighting effects. For sampling on a disk, we suggest filling a large buffer with uniformly distributed random floating point values on the CPU and uploading it to the GPU once. This saves time and prevents flickering, since the same sample pattern is reused every frame.

## Future Improvements

A paper online suggested evaluating the RSM at a lower resolution and linearly interpolating the results for the final image. This works very well for game scenes, which tend to be spatially coherent. I decided not to spend time implementing this, as the VPL cloud will be reused in my Light Propagation Volume implementation.
Figure 12: Final RSM Result