By Peter Sikachev


Transparent objects have always been a challenge to render in video games. Firstly, performance is a major issue: rendering transparent objects can result in heavy overdraw, with many layers blended at a single fragment.

Secondly, transparent objects are not compatible with all rendering techniques. In deferred rendering engines, one needs to create a special forward pass for them; transparent objects also do not work correctly with post-processes that require depth information: screen-space ambient occlusion, depth of field, and screen-space reflections, to name a few. Self-shadowing of transparent objects can also be challenging.

Last but not least, sorting is the major issue for transparency. While it is trivial to sort instances of the same object (e.g., particles within a particle system) against each other, proper compositing of different transparent objects can be very difficult (see Figures 1 and 2). For instance, a particle system that is partially covered by a volumetric effect cannot be trivially sorted, because some particles may end up in front of the volumetric effect and others behind it.


Figure 1. Particle system and instanced transparent objects.

Figure 2. Transparent object inside particle system.

In this blog post we propose an approach that addresses some of these issues. In particular, we examine sorting and self-shadowing methods.


We introduce an extra two-channel render target which we call a depth proxy. In this texture, we store the minimum and maximum linear depth of a transparent entity. When another transparent entity is rendered, it fetches this texture to correctly composite itself with the first entity, as shown in Figure 3.


Figure 3. DPTR in a nutshell.

We utilize the independent blend feature, available in DirectX 10.1 and later graphics APIs. Thus, we are able to fill the depth proxy in the same pass in which we render the transparent entity itself, without introducing additional draw calls. The depth proxy render target is rendered with the MIN blend function, which keeps the global minimum in the min depth channel. As a custom per-channel blend function is not available, we have to use the same MIN blend for the max depth channel as well. To make the MIN blend function select the max depth, we negate the depth when we output it to the max channel and negate it once again during the fetch.
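As an illustration, the effect of MIN blending on the two channels can be sketched in scalar C++ (the DepthProxy struct and blendMin function are illustrative stand-ins for the render target and the hardware blend op, not engine code):

```cpp
#include <algorithm>
#include <cassert>
#include <cfloat>

// Illustrative stand-in for the RG16F depth proxy render target:
// both channels are accumulated with the hardware MIN blend op.
struct DepthProxy {
    float minZ;     // r channel: running min of linear depth
    float negMaxZ;  // g channel: running min of negated depth == -max(depth)
};

// What the MIN blend op does per channel when a fragment writes its depth.
void blendMin(DepthProxy& dst, float fragmentDepth) {
    dst.minZ    = std::min(dst.minZ,    fragmentDepth);
    dst.negMaxZ = std::min(dst.negMaxZ, -fragmentDepth);
}
```

After clearing both channels to FLT_MAX and blending a few depths, negating the g channel on fetch recovers the true maximum.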

When rendering another transparent entity that needs to be sorted with the one rendered with the depth proxy, we fetch both the color/opacity and the depth proxy textures. The two transparent entities are then superimposed, assuming that the entity's opacity accumulates linearly from min depth to max depth. The following code shows how the depth factor is calculated:

float refZ = viewSpaceCoord.z;
// Both channels are blended with MIN, so the g channel holds the negated max depth.
float2 proxyDepth = DepthProxyMap.Sample(textureSampler, texCoords).rg;
float minZ = proxyDepth.r;
float maxZ = -proxyDepth.g;
float depthFactor = saturate((refZ - minZ) / abs(maxZ - minZ));

In the subsequent sections we present implementation details and results for three use cases of the technique. However, the technique is not limited to those cases, and we encourage the reader to explore other applications.


Case Study: Particles-Transparent Meshes Sorting
One of the most irritating problems is sorting between particles and other transparent objects. While one can theoretically sort particle emitters and transparent meshes (e.g., by breaking instancing and rendering both into the same buffer), it is almost impossible to sort individual particles against transparent meshes. As a result, popping is really noticeable when a transparent object moves inside a particle system.

We can address this issue using DPTR. We bind an additional RG16F render target when rendering particles and modify the particle pixel shader: in addition to the color output, linear view-space depth is written to the DPTR render target (the same value goes to both channels, with the y channel negated).

Transparent meshes, in turn, sample both the color and the DPTR textures. We use the following code to superimpose the particles and transparent meshes:

// color.a holds the accumulated particle transmittance (1 = fully see-through).
// Scale the particle opacity by the fraction of the proxy depth range in front of the mesh.
color.a = 1.0f - (1.0f - color.a) * depthFactor;
output.fragColor.rgb = lerp(color.rgb, output.fragColor.rgb, color.a);
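The compositing above can be checked with a scalar sketch (names and values are illustrative; transmittance plays the role of color.a, i.e., 1 means fully see-through):

```cpp
#include <cassert>

struct Rgb { float r, g, b; };

Rgb lerpRgb(Rgb a, Rgb b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Scale the particle opacity by the fraction of the proxy depth range
// lying in front of the mesh, then blend the mesh over the particles.
Rgb composite(Rgb particle, Rgb mesh, float transmittance, float depthFactor) {
    float t = 1.0f - (1.0f - transmittance) * depthFactor;
    return lerpRgb(particle, mesh, t);
}
```

With depthFactor = 0 the mesh is in front of the whole particle volume and comes out unoccluded; with depthFactor = 1 the full particle opacity applies.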

The overhead of the method is about 20% of the particle pass, provided particles are rendered into an RGBA16F render target. This comes primarily from particles being ROP/output-bandwidth bound. The obvious limitation of the technique arises when several particle effects (with a significant distance between them) project onto the same pixel, with transparent meshes in between.


Case Study: Particles Self-Shadowing
Particles like smoke cast semi-transparent shadows. While it is trivial to render particle shadows on solid surfaces, it is not as trivial to simulate particle self-shadowing. However, DPTR can be used to solve this task.

We render the opacity into an R8UNORM render target. Just as for particle-transparent mesh sorting, we bind an additional RG16F render target when rendering the particle shadow. Once the translucent shadow is rendered, we use the following shading term in the particle base pass:

translucentShadowTerm = 1.0f - pow(depthFactor, opacityPower) *
    translucencyMap.SampleLevel(bilinearSampler, shadowTC.xy, 0.0f);

As particle billboards do not match between view space and light space, our depth proxies do not completely match the actual particles. When a particle disappears, the min/max depth changes abruptly, and the self-shadow flickers.

To mitigate that, we employ a simple self-shadow reprojection. We do it in the particle shadow pixel shader, as shown in the following code snippet:

// Clamp this frame's proxy depth against last frame's value plus a small
// bias, so the proxy range cannot change abruptly when a particle dies.
output.minMaxDepth = min(output.minMaxDepth,
    minMaxParticleDepthPrev.SampleLevel(pointSampler, texCoord, 0.0f).rg + 0.1f);
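A scalar sketch of this temporal clamp (the 0.1 bias value comes from the snippet above; the function name is illustrative):

```cpp
#include <algorithm>
#include <cassert>

// Limit how fast the proxy depth can grow between frames: this frame's
// value may not exceed last frame's value plus a small bias, so the proxy
// range no longer jumps when a particle disappears.
float reprojectDepth(float currDepth, float prevDepth, float bias = 0.1f) {
    return std::min(currDepth, prevDepth + bias);
}
```

Over several frames the clamped value catches up with the true depth by one bias step per frame, which turns a pop into a short fade.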

To assess the performance impact, we ran a test with ~10-20k particles and a 2k×2k shadow map on a GeForce GTX 1060. While a traditional translucent shadow map takes 0.25 ms, self-shadowing with DPTR takes 0.33 ms, yielding ~30% overhead.


Case Study: Volumetric Effects-Particles Sorting
Ray marching has long been used to simulate light scattering in participating media. However, this method has a number of flaws. For instance, it cannot be properly sorted with other transparent objects that are inside the medium: particles, transparent meshes, etc.

To overcome this, froxel-based volumetric lighting was introduced. Although it addresses many of these issues, current practical implementations are too low-resolution for certain effects, such as a flashlight beam.

During the ray marching, we output the ray entry point as the min depth and the ray exit point as the max depth. We superimpose volumetric effects and other transparent objects using the same equation as in the particle-transparent mesh case.
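A scalar sketch of the depth factor in this case (values are illustrative): with the ray entry at depth 10 and exit at depth 20, a particle at depth 15 receives a depth factor of 0.5, so half of the volumetric's opacity is composited in front of it.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

float saturate(float x) { return std::min(std::max(x, 0.0f), 1.0f); }

// Same equation as in the particle-transparent mesh case: minZ is the ray
// entry depth and maxZ is the ray exit depth of the volumetric effect.
float depthFactor(float refZ, float minZ, float maxZ) {
    return saturate((refZ - minZ) / std::abs(maxZ - minZ));
}
```

Objects in front of the entry point get a factor of 0 (no fog in front of them), and objects beyond the exit point get 1 (the full fog opacity in front).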

Performance-wise, we have not observed any measurable cost from using DPTR with volumetric effects. In our case, the volumetrics were not ROP/output-bandwidth bound, and the particles were not texture bound. However, your mileage may definitely vary.


In this article we presented a novel method called Depth Proxy Transparency Rendering. At minimal overhead, it creates a proxy depth range for a transparent effect, which can then be used for various applications. We showcased DPTR for a couple of transparency sorting/superimposing applications and as a self-shadowing method. We encourage the reader to explore new use cases for the proposed technique.