Deferred Rendering – Part II: Handling Normals

This week, work went into adding normal data to the G-Buffer and Light passes. I had left normal data out while I was still implementing the architecture for the deferred renderer, so in the previous post we were only doing ambient lighting.

The Stanford Dragon 3D model rendered with a single directional light by the new deferred renderer.


Adding normal data required a few tweaks to several components of the renderer.

First, I had to extend my Vertex Buffers to accommodate three extra floating-point values (the normal data). If you’ve been following this series of posts, you’ll remember from this post that we designed our Interleaved Arrays with the format XYZUVRGBA. I’ve now expanded this format to include per-vertex normals. The new packing is XYZNNNUVRGBA, where NNN denotes the normal’s x, y and z values.
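To make the new layout concrete, here is a rough sketch of how such an interleaved vertex could be described to OpenGL. The struct name, attribute locations and loader header are assumptions for illustration, not the engine’s actual code:

#include <cstddef>   // offsetof
#include <GL/glew.h> // assumption: any OpenGL loader exposing the GL 3.x API

// Illustrative XYZNNNUVRGBA interleaved vertex (names are hypothetical).
struct Vertex
{
    float position[3]; // XYZ
    float normal[3];   // NNN
    float uv[2];       // UV
    float color[4];    // RGBA
};

// Describe the interleaved layout for the currently bound VAO/VBO.
void describeVertexLayout()
{
    const GLsizei stride = sizeof(Vertex);

    glEnableVertexAttribArray(0); // position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<const void*>(offsetof(Vertex, position)));

    glEnableVertexAttribArray(1); // normal
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<const void*>(offsetof(Vertex, normal)));

    glEnableVertexAttribArray(2); // texture coordinates
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<const void*>(offsetof(Vertex, uv)));

    glEnableVertexAttribArray(3); // vertex color
    glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<const void*>(offsetof(Vertex, color)));
}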

Next, the G-Buffer had to be extended to include a third render target: a normal texture. This is where we store the interpolated normal data calculated during the Geometry pass.
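As a minimal sketch of that change, creating the extra render target and attaching it to the G-Buffer FBO could look roughly like this; the GL_RGB16F format, the attachment index and the function name are assumptions, not Vortex’s actual code:

#include <GL/glew.h> // assumption: any OpenGL loader exposing the GL 3.x API

// Create a floating-point normal texture and attach it to the G-Buffer FBO
// as a third color attachment.
GLuint addNormalRenderTarget(GLuint gBufferFbo, int width, int height)
{
    GLuint normalTexture = 0;
    glGenTextures(1, &normalTexture);
    glBindTexture(GL_TEXTURE_2D, normalTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0,
                 GL_RGB, GL_FLOAT, nullptr); // note the GL_FLOAT data type (see below)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glBindFramebuffer(GL_FRAMEBUFFER, gBufferFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2,
                           GL_TEXTURE_2D, normalTexture, 0);
    return normalTexture;
}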

Finally, the Light pass shader was extended to take in the normal texture, sample the per-fragment world-space normal and use it in its lighting calculation.
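A minimal sketch of what such a Light pass fragment shader might look like for a single directional light follows; the uniform and varying names are made up for illustration, and the source is shown here as a C++ string literal:

// Hypothetical Light pass fragment shader: sample the G-Buffer normal and
// compute simple Lambertian shading for one directional light.
const char* lightPassFragmentSrc = R"GLSL(
#version 330

uniform sampler2D uAlbedoTexture;
uniform sampler2D uNormalTexture;
uniform vec3 uLightDirection; // world space, direction the light travels
uniform vec3 uLightColor;

in vec2 vTexCoord;
out vec4 outColor;

void main()
{
    vec3 albedo = texture(uAlbedoTexture, vTexCoord).rgb;
    vec3 normal = normalize(texture(uNormalTexture, vTexCoord).xyz);

    float nDotL = max(dot(normal, normalize(-uLightDirection)), 0.0);
    outColor = vec4(albedo * uLightColor * nDotL, 1.0);
}
)GLSL";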

The test box rendered with a hard directional light with direction (-1, -1, 0).


For testing, I used our usual textured box model and a directional light with a rotating direction, as depicted above. The box is great for this kind of test, as its faces are aligned with the coordinate planes and opposite faces have opposing normals.

Soon enough, the test revealed a problem: faces whose normals had negative components were not being lit. Drawing the normal data to the framebuffer during the light pass helped narrow down the problem tremendously. It turned out that normal data sampled from the normal texture was being clamped to the [0, 1] range, which meant we could not represent normals with negative components.

Going through the OpenGL documentation for floating-point textures revealed the problem. According to the docs, if the data type of a texture ends up being defined as fixed-point, sampled data will be clamped. The culprit was a single incorrect parameter in the glTexImage2D calls used to create the render textures: the data type was being set to GL_UNSIGNED_BYTE even for floating-point textures.

The fix was simple enough: set the data type to GL_FLOAT for floating-point textures, even when the internal format is already floating point.
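In code, the difference described above boils down to a single argument; the internal format and the wrapper function below are placeholders for illustration:

#include <GL/glew.h> // assumption: any OpenGL loader exposing glTexImage2D

// Define a floating-point render texture whose samples are not clamped.
void defineFloatRenderTexture(int width, int height)
{
    // Buggy version: fixed-point data type, sampled values were clamped to [0, 1].
    // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0,
    //              GL_RGB, GL_UNSIGNED_BYTE, nullptr);

    // Fixed version: GL_FLOAT keeps negative normal components intact.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0,
                 GL_RGB, GL_FLOAT, nullptr);
}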

Our old forktruck scene, loaded into the Editor and rendered using the first iteration of the deferred renderer.


With this fix in place, I could confirm that we are able to light a surface from any direction with our simple Light pass. The image above shows our old warehouse scene loaded in the Editor, with the forktruck being lit by the deferred renderer!

There are now several paths along which work can continue: we can extend our serialization format to include material data, continue improving the visuals, or start testing our code on Mac. Stay tuned for more!

Deferred Rendering – Part I

This week, work went into completing the initial implementation of a Deferred Renderer for Vortex. Every feature we have been incorporating into the Engine so far was building up to this moment, and this time around I finished writing all the components necessary to render geometry to the G-Buffer and then perform a simple light pass.

Initial implementation of deferred rendering in the Vortex V3 Engine.


The image above shows the current functionality. Here, a box is created and its material is set to the “Geometry Pass” built-in shader. Other engines may refer to this shader by a different name, but its purpose is always the same: to populate the G-Buffer with data.

Once we’ve assigned the shader to the material, we attach a diffuse texture to it. This will be the object’s albedo value, which we write into the G-Buffer together with the corresponding position and normal data. Multi-Render Target support in Vortex is fundamental for filling the G-Buffer efficiently in a single render pass.
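As a sketch of what the output side of such a Geometry Pass shader might look like (the output names and attachment order are illustrative assumptions), the fragment shader simply routes albedo, position and normal to three attachments in one pass:

// Hypothetical "Geometry Pass" fragment shader: fill three G-Buffer targets at once.
const char* geometryPassFragmentSrc = R"GLSL(
#version 330

uniform sampler2D uDiffuseTexture;

in vec2 vTexCoord;
in vec3 vWorldPosition;
in vec3 vWorldNormal;

layout(location = 0) out vec4 outAlbedo;   // diffuse texture sample
layout(location = 1) out vec3 outPosition; // world-space position
layout(location = 2) out vec3 outNormal;   // world-space normal

void main()
{
    outAlbedo   = texture(uDiffuseTexture, vTexCoord);
    outPosition = vWorldPosition;
    outNormal   = normalize(vWorldNormal);
}
)GLSL";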

Next, we create a light entity. We introduced the new light interface in Vortex back in February. Light entities are gathered by the V3 renderer and, depending on their type, drawn directly to the framebuffer. The light shader is selected according to the light type, and it accesses the readily available G-Buffer data to shade the scene.

The postprocessing underpinnings of the new renderer turned out to be essential for testing new ideas and debugging the implementation every step of the way.

I’m quite happy with the results. Now that we have a complete vertical slice of the deferred renderer, we can iteratively add new features on top of it to expand its capabilities. Off the top of my head, support for more light types and for transparent meshes are two major features I want to tackle in the upcoming weeks.

G-Buffer

This week I started working on the G-Buffer implementation for the deferred renderer.

A test of the G-Buffer. Colors correspond to the coordinates of the vertices in world space.


The G-Buffer pass constitutes the first half of any deferred shading algorithm. The idea is that, instead of drawing shaded pixels directly to the screen, we store geometric information for all our opaque objects in a “Geometry Buffer” (G-Buffer for short). This information is then used in a later lighting pass.

In sharp contrast to forward rendering, where the output of each draw is already-lit pixels, lighting calculations are here deferred to a later pass, hence the term deferred rendering.
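In skeleton form, a deferred frame therefore splits into two stages; the function names below are hypothetical placeholders rather than Vortex’s actual API:

// Hypothetical helpers standing in for the engine's real calls.
void bindGBufferFramebuffer();
void drawOpaqueGeometryWithGeometryPassShader();
void bindDefaultFramebuffer();
void drawFullScreenQuadWithLightShader();

// One deferred frame: geometry pass first, lighting pass second.
void renderFrameDeferred()
{
    // Geometry pass: store position, normal and albedo per pixel in the G-Buffer.
    bindGBufferFramebuffer();
    drawOpaqueGeometryWithGeometryPassShader();

    // Light pass: read the G-Buffer and shade the whole screen.
    bindDefaultFramebuffer();
    drawFullScreenQuadWithLightShader();
}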

The image above shows a simple test of the G-Buffer implementation in Vortex. In this test, we draw a number of boxes, with each vertex colored according to its position in world space as read from the G-Buffer.

Notice how as vertices move left to right, we gain red (x translates to red). Similarly, as vertices move from bottom to top, we gain green (y translates to green). We cannot see it in this picture, but as vertices move from the back to the front, we also gain blue (z translates to blue).

Through this test we can also verify that moving the camera does not change the colors at all. This helps validate that our position data are indeed in world space.
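For reference, a debug fragment shader for this kind of test can be as small as the following sketch (variable names are illustrative):

// Hypothetical debug fragment shader: visualize the world-space position
// stored in the G-Buffer by writing it straight into the color output
// (x maps to red, y to green, z to blue).
const char* positionDebugFragmentSrc = R"GLSL(
#version 330

in vec3 vWorldPosition;
out vec4 outColor;

void main()
{
    outColor = vec4(vWorldPosition, 1.0);
}
)GLSL";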

This is going to be all for today. Next week, work will continue on the G-Buffer implementation. Stay tuned for more!

Multi-Render Targets

This week, work went into testing Multi-Render Target support. Usually abbreviated MRT, Multi-Render Targets allow our shaders’ output to be written to more than one texture in a single render pass.

A test of Multi-Render Targets in Vortex 3. Texels from the left half of the box come from one texture, whereas texels from the right half come from another.


MRT is the foundation of every modern renderer, as it allows building complex visuals without requiring several passes over the scene. With MRT, we can simply specify all the textures that a render pass will write to and then, from the shader code, write to each out variable.
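On the API side, this amounts to attaching several color textures to one framebuffer and declaring them as draw buffers; the handles and attachment count below are placeholders:

#include <GL/glew.h> // assumption: any OpenGL loader exposing the GL 3.x API

// Bind two color textures to one FBO and enable both as MRT outputs.
void setupTwoTargetFramebuffer(GLuint fbo, GLuint redTexture, GLuint greenTexture)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, redTexture, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, greenTexture, 0);

    // Tell OpenGL that fragment shader outputs 0 and 1 are both live.
    const GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, drawBuffers);
}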

MRT is standard in both Core OpenGL and OpenGL ES 3.0, so opting into it will not preclude the renderer from running on mobile hardware. Only a couple of minor adjustments had to be made to our current shaders in order to support MRT.

In particular, OpenGL 3.3 changed the way fragment shaders write to multiple attachments. Prior to OpenGL 3.3, we would write to the GLSL built-in array gl_FragData[i] to select the output being written. Starting with OpenGL 3.3, we explicitly specify the layout of our out variables in the fragment shader using the seemingly odd syntax layout(location = i) out vec3 attachment_i; and then write to each variable directly.

To achieve this, we had to bump the GLSL version to 330. We could have stayed at version 150 and used a GLSL extension, but since we are trying to stick to standard, out-of-the-box OpenGL as much as possible, that route was ruled out.
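Put together, a GLSL 330 fragment shader with two routed outputs might look like the sketch below; the uniform and output names are made up for illustration:

// Hypothetical MRT fragment shader: write the red and green channels of a
// diffuse texture to two different attachments via layout qualifiers.
const char* mrtTestFragmentSrc = R"GLSL(
#version 330

uniform sampler2D uDiffuseTexture;

in vec2 vTexCoord;

layout(location = 0) out vec4 outRed;   // goes to GL_COLOR_ATTACHMENT0
layout(location = 1) out vec4 outGreen; // goes to GL_COLOR_ATTACHMENT1

void main()
{
    vec4 diffuse = texture(uDiffuseTexture, vTexCoord);
    outRed   = vec4(diffuse.r, 0.0, 0.0, 1.0);
    outGreen = vec4(0.0, diffuse.g, 0.0, 1.0);
}
)GLSL";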

A test of the Multi-Render Target functionality in Core OpenGL 3.3


To exercise MRT, I designed a simple test; the output can be seen above. In these images, the shader used to render the box writes the red and green channels to two different texture attachments.

The blit pass that draws the framebuffer to the screen samples both textures, using pixels from the red texture for the left half of the screen and pixels from the green texture for the right half. This produces the visual effect of the box being painted in two colors.
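A sketch of that blit shader, assuming made-up uniform names, might read:

// Hypothetical composite shader for the test: the left half of the screen
// samples the first attachment, the right half samples the second.
const char* splitBlitFragmentSrc = R"GLSL(
#version 330

uniform sampler2D uRedTexture;
uniform sampler2D uGreenTexture;

in vec2 vTexCoord;
out vec4 outColor;

void main()
{
    if (vTexCoord.x < 0.5)
        outColor = texture(uRedTexture, vTexCoord);
    else
        outColor = texture(uGreenTexture, vTexCoord);
}
)GLSL";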

I’m very happy with the results of this test. MRT is a very approachable feature and there is no reason not to use it if you are targeting recent hardware.

The next steps will be to clean up the internal Framebuffer API even more to make MRT support more flexible, and to start working on implementing the G-Buffer. As usual, stay tuned for more!