# Deferred Rendering – Part I

This week, work went into completing the initial implementation of a Deferred Renderer for Vortex. Every feature we had been incorporating into the Engine so far was building up to this moment and so, this time around, I finished writing all the components necessary to allow rendering geometry to the G-Buffer and then doing a simple light pass.

Initial implementation of deferred rendering in the Vortex V3 Engine.

The image above shows the current functionality. Here, a box is created and its material is set to the “Geometry Pass” built-in shader. Other engines usually refer to this shader by another name, but ultimately, the purpose is always the same: populating the G-Buffer with data.

Once we’ve assigned the shader to the material, we attach a diffuse texture to it. This will be the object’s albedo value, which we will write into the G-Buffer together with the corresponding position and normal data. Multi-Render Target support in Vortex is fundamental here: it lets us fill the G-Buffer efficiently in a single render pass.
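
As an illustration, a minimal geometry-pass fragment shader can look like the following sketch (GLSL embedded as a C++ source string; the names and G-Buffer layout are illustrative, not the actual “Geometry Pass” shader):

```cpp
// Illustrative sketch only; not the actual "Geometry Pass" shader.
const char* kGeometryPassFS = R"(
#version 330

uniform sampler2D uDiffuseMap;

in vec3 vWorldPosition;
in vec3 vWorldNormal;
in vec2 vTexCoord;

layout(location = 0) out vec3 gPosition;  // world-space position
layout(location = 1) out vec3 gNormal;    // world-space normal
layout(location = 2) out vec3 gAlbedo;    // diffuse/albedo color

void main() {
    gPosition = vWorldPosition;
    gNormal   = normalize(vWorldNormal);
    gAlbedo   = texture(uDiffuseMap, vTexCoord).rgb;
}
)";
```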

Next, we create a light entity. I covered the new light interface in Vortex back in February. Light entities are gathered by the V3 renderer and drawn directly into the framebuffer. A light shader is selected based on the light type; it reads the readily available G-Buffer data to shade the scene.

The Postprocessing Underpinnings in the new renderer turned out to be essential for testing new ideas and debugging the implementation every step of the way.

I’m quite happy with the results. Now that we have a complete vertical slice of the deferred renderer, we can iteratively add new features on top of it to expand its capabilities. Off the top of my head, support for more light types and for transparent meshes are two major features that I want to tackle in the upcoming weeks.

# G-Buffer

This week I started working on the G-Buffer implementation for the deferred renderer.

A test of the G-Buffer. Colors correspond to the coordinates of the vertices in world space.

The G-Buffer pass makes up the first half of any deferred shading algorithm. The idea is that, instead of drawing shaded pixels directly to the screen, we store geometric information for all our opaque objects in a “Geometry Buffer” (G-Buffer for short). This information is then used in a later lighting pass.

In sharp contrast to forward rendering, where the output of our rendering is the already-lit pixels, lighting calculations here are deferred to a later pass, hence the term deferred rendering.

The image above shows a simple test for the G-Buffer implementation in Vortex. In this test, we draw a number of boxes, with each vertex colored according to its position in world space, as read from the G-Buffer.

Notice how as vertices move left to right, we gain red (x translates to red). Similarly, as vertices move from bottom to top, we gain green (y translates to green). We cannot see it in this picture, but as vertices move from the back to the front, we also gain blue (z translates to blue).

Through this test we can also verify that moving the camera does not change the colors at all. This helps validate that our position data are indeed in world space.

This is going to be all for today. Next week, work will continue on the G-Buffer implementation. Stay tuned for more!

# Multi-Render Targets

This week, work went into testing Multi-Render Target support. Usually abbreviated MRT, Multi-Render Targets allow our shaders’ output to be written to more than one texture in a single render pass.

A test of Multi-Render Targets in Vortex 3. Texels from the left half of the box come from a texture, whereas texels from the right half come from another.

MRT is foundational to modern renderers, as it allows building complex visuals without requiring several passes over the scene. With MRT, we simply specify all the textures that a render pass will be writing to and then, from the shader code, write to each out variable.

MRT is standard in both Core OpenGL and OpenGL ES 3.0, so opting into it will not preclude the renderer from running on mobile hardware. Only a couple of minor adjustments had to be made to our current shaders in order to support MRT.

In particular, OpenGL 3.3 changed the way fragment shaders write to multiple attachments. Previously, we would write to the GLSL built-in variable gl_FragData[i] to select the output we were writing to. Starting with OpenGL 3.3, we explicitly declare the layout of our out variables in the fragment shader, using the (at first sight) odd syntax layout(location = i) out vec3 attachment_i; and then write to each variable directly.
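
As a sketch, here is what the two pieces can look like for the test described below (variable and function names are illustrative):

```cpp
// GLSL 330: one out variable per color attachment.
const char* kMRTFragmentSource = R"(
#version 330

in vec4 vColor;

layout(location = 0) out vec3 attachment0;
layout(location = 1) out vec3 attachment1;

void main() {
    // Split the fragment color: the red channel goes to one
    // attachment, the green channel to the other.
    attachment0 = vec3(vColor.r, 0.0, 0.0);
    attachment1 = vec3(0.0, vColor.g, 0.0);
}
)";

// C++ side (assumes a GL loader header is already included): tell
// OpenGL which color attachments of the bound framebuffer the shader
// outputs map to.
void enableTwoDrawBuffers() {
    const GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, buffers);
}
```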

In order to achieve this, we had to bump the GLSL version to 330. There was the option of staying at version 150 and using a GLSL extension, but since we are trying to stick to standard, out-of-the-box OpenGL as much as possible, we ruled it out.

A test of the Multi-Render Target functionality in Core OpenGL 3.3

To put MRT through its paces, I designed a simple test; the output can be seen above. In these images, the shader used to render the box writes its red and green channels to two different texture attachments.

The blit pass that draws the framebuffer to the screen samples both textures and uses pixels from the red texture for the left half of the screen and pixels from the green texture for the right half. This creates the visual effect of the box being painted in two colors.
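
A sketch of what that blit shader can look like (uniform and varying names are illustrative):

```cpp
const char* kBlitFragmentSource = R"(
#version 330

uniform sampler2D uRedTexture;    // first color attachment
uniform sampler2D uGreenTexture;  // second color attachment

in vec2 vTexCoord;

layout(location = 0) out vec4 fragColor;

void main() {
    vec4 redSample   = texture(uRedTexture, vTexCoord);
    vec4 greenSample = texture(uGreenTexture, vTexCoord);
    // Left half of the screen shows the first attachment,
    // right half shows the second.
    fragColor = mix(redSample, greenSample, step(0.5, vTexCoord.x));
}
)";
```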

I’m very happy with the results of this test. MRT is a very approachable feature and there is no reason not to use it if you are targeting recent hardware.

The next steps will be to clean up the internal Framebuffer API even more to make MRT support more flexible, and to start working on implementing the G-Buffer. As usual, stay tuned for more!

# Quick Procedural Geometry Notes

This past week was crazy busy and I didn’t have any time to sit down and code. Nonetheless, I managed to work out the math and jot down some quick pseudocode for procedurally generating a cone.

Math and Pseudocode for generating a cone procedurally.

Here, I’ve chosen to place the cone’s apex at (0,0,0) in Object Space. This will allow me to always rotate the cone about its apex, something that will be extremely helpful for the effect we’re trying to achieve.

I haven’t had the time to test this idea in code yet; the plan, however, is to build it into Vortex as part of the vtx::procedural package.
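
Until the real implementation lands, here is a rough C++ sketch of the idea (untested, and the vtx::procedural API is not final, so all names here are tentative):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Generates the side surface of a cone as a list of triangles, with
// the apex at the origin so that rotations pivot on the apex. The
// base circle sits at y = -height.
std::vector<Vec3> makeCone(float radius, float height, int segments) {
    std::vector<Vec3> verts;
    const float kTwoPi = 6.2831853f;
    const float step = kTwoPi / segments;
    const Vec3 apex = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < segments; ++i) {
        const float a0 = i * step;
        const float a1 = (i + 1) * step;
        verts.push_back(apex);
        verts.push_back({ radius * std::cos(a0), -height, radius * std::sin(a0) });
        verts.push_back({ radius * std::cos(a1), -height, radius * std::sin(a1) });
    }
    return verts;
}
```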

I’m not giving away any more details this week! You’ll have to stay tuned for more!

# Wrapping up 2016: Lessons Learned from working on Vortex

2016 marks a year where a lot of work went into both the Vortex Engine and the Vortex Editor. Working on both these projects during my free time has been great to continue to hone my C++ and OpenGL skills. In this post, I am going to do a quick retrospective on the work done and present a few lessons learned.

## Lessons on making such a UI-heavy application

Rendering a deserialized scene using the new renderer.

The way I decided to approach this work was to divide Vortex into two separate projects: the Vortex Engine project and a brand new Editor built on top of it. The Engine itself has been on-going since 2010.

Today, both projects have reached a level of maturity where we could clearly have two engineers working full-time on them.

The Editor is definitely one of the driving forces pushing the Engine to support more and more visual features, as these give the user more power of expression. The amount of work required to expose these features to the outside world, however, is something that I did not expect.

Let’s walk through a simple example: selecting an Entity and editing it so we can change its properties. In order to do this we must:

1. Provide the user a means to select an Entity.
2. React to this, introspecting the selected Entity and its components.
3. Build and display a custom UI for each component in the Entity.
4. For each component, render its UI widgets and preload them with the component’s current values.
5. Provide the user the means to change these properties.
6. Have the changes reflect in the 3D scene immediately.
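
A rough sketch of what steps 2 through 4 can look like (all types and functions here are illustrative stand-ins, not the actual Vortex Editor API):

```cpp
#include <memory>
#include <vector>

// Illustrative stand-ins for the Editor's real types:
struct Component { virtual ~Component() = default; };
struct Entity { std::vector<std::shared_ptr<Component>> components; };
struct PropertyPanel { /* UI widgets for one component */ };

std::unique_ptr<PropertyPanel> buildPanelFor(const Component& /*component*/) {
    // In the real Editor this would dispatch on the component's concrete
    // type and preload the widgets with the component's current values.
    return std::unique_ptr<PropertyPanel>(new PropertyPanel());
}

// React to an Entity being selected (steps 2-4): introspect it and
// build one custom property panel per component.
std::vector<std::unique_ptr<PropertyPanel>> onEntitySelected(const Entity& entity) {
    std::vector<std::unique_ptr<PropertyPanel>> panels;
    for (const auto& component : entity.components) {
        panels.push_back(buildPanelFor(*component));
    }
    return panels;
}
```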

This system can quickly grow into thousands of lines of code. Even if the code does not have the strenuous performance requirements of the rendering loop, we still need to develop responsive code with a good architecture that allows building more features on top of it.

The rewards from this effort are huge, however. The Editor UI is the main point of interaction of the user with Vortex and it’s the way that she can tell the Engine what she wants it to do. Good UI and, more importantly, good UX, are key in making the Editor enjoyable to the user.

## Lessons on going with C++11

I decided to finally make the jump and update the codebase from C++98 to C++11, and the min-spec for running the renderer from OpenGL ES 2.0 to Core OpenGL 3.3.

Going to C++11 was the right choice. I find that C++11 allows for more power of expression when developing a large C++ codebase, and it provides several utilities for making the code cleaner.

There are a few takeaways from using C++11, however, that I think may not be as clear for people just getting started with this version of the language.

## Lessons on C++11 enum classes

I like enum classes a lot and tend to use them as much as possible. There were several places throughout the legacy Vortex Engine code where configuration parameters were declared via old C-style structs and/or static const values, and these did not look too clean. C++11 enum classes helped wrap them while keeping their enclosing namespaces tidy.

The only limitation I found was using enum classes for bitmasks. enum class members can certainly be cast to make them behave as expected, but doing so strong-arms the type system, and some may argue it does away with the advantages of having it in the first place.

Additionally, if you’re trying to implicitly cast a binary operator expression involving an enum class value into a bool, you are going to hit a roadblock.
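
Here is a minimal sketch of the issue and the usual workaround (the RenderFlags type and its members are illustrative names, not the actual Vortex API):

```cpp
#include <cstdint>

// Illustrative names, not the actual Vortex API:
enum class RenderFlags : uint32_t {
    kParameterFlag = 1 << 0,
    kOtherFlag     = 1 << 1,
};

// enum classes don't come with bitwise operators, so we define one.
// Note that it returns RenderFlags, not bool:
inline RenderFlags operator&(RenderFlags a, RenderFlags b) {
    return static_cast<RenderFlags>(static_cast<uint32_t>(a) &
                                    static_cast<uint32_t>(b));
}

void configure(RenderFlags mask) {
    // if (mask & RenderFlags::kParameterFlag) { ... }
    //     ^ error: RenderFlags does not implicitly convert to bool
    if ((mask & RenderFlags::kParameterFlag) != RenderFlags(0)) {
        // ...the mandatory comparison against zero
    }
}
```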

I like writing if (mask & kParameterFlag), as I find it clearer to read than having a mandatory comparison against zero at the end, and C++11 enum classes do not give me that option out of the box.

## Lessons on C++11 Weak Pointers

C++11’s smart pointers (std::shared_ptr and std::weak_ptr) are great for an application like the Editor, where Entity references (pointers) need to be passed around.

Imagine this situation: we have an Entity with a few components that is selected. We have several UI components that are holding pointers to all these objects. Now, if the user decides to delete this entity or remove one of its components, how do we prevent the UI code from dereferencing dangling pointers? We would have to hunt down all references and null them.

Using C++11’s std::weak_ptr, we know that a pointer to a destroyed Entity or Component will fail to lock. This means that the UI controllers can detect situations where they are pointing to deleted data and gracefully handle them.
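
A minimal sketch of the pattern (EntityPanel is an illustrative stand-in for one of the Editor’s UI controllers):

```cpp
#include <memory>

struct Entity { /* components, transform, ... */ };

// Illustrative UI controller holding a non-owning reference:
struct EntityPanel {
    std::weak_ptr<Entity> selected;

    void refresh() {
        // lock() returns an empty shared_ptr if the Entity was deleted:
        if (std::shared_ptr<Entity> entity = selected.lock()) {
            // Safe: the Entity stays alive for the duration of this scope.
        } else {
            // The Entity was destroyed elsewhere; clear the UI instead
            // of dereferencing a dangling pointer.
        }
    }
};
```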

## Lessons on C++11 Shared Pointers

Like any other C++ object, smart pointers passed by value will be copied. Unlike copying a raw pointer, however, copying a smart pointer is an expensive operation.

Smart pointers need to keep a reference count to know when it’s okay to delete the managed object and, additionally, C++ mandates that this bookkeeping be performed in a thread-safe fashion. In practice, this means an atomic increment and decrement for every smart pointer we create and destroy, rather than plain integer arithmetic.

If you’re not careful, your CPU cycles will be spent copying pointers around, and you may have a hard time when you try to scale your Engine down to run on a mobile device.

I investigated this issue and found this amazing write-up by Herb Sutter: https://herbsutter.com/2013/06/05/gotw-91-solution-smart-pointer-parameters/. The idea is to avoid passing shared pointers by copy to any function that does not intend to keep a long-term reference to the object.

Don’t be afraid of calling std::shared_ptr::get() to pass a raw pointer to a short, pure function that performs temporary work. Opt into smart pointer parameters only when you want to signal to the outside world that the function shares ownership of the passed-in object.
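
A sketch of the guideline in practice (the function names are illustrative):

```cpp
#include <memory>

struct Mesh { /* vertex data, ... */ };

// Short, pure helper: a raw pointer implies no ownership transfer and
// causes no reference-count traffic:
void computeBounds(const Mesh* mesh) { /* ... */ }

// This function genuinely keeps a long-term reference, so it takes the
// shared_ptr by value and shares ownership:
void retainForStreaming(std::shared_ptr<Mesh> mesh) { /* ... */ }

void frame(const std::shared_ptr<Mesh>& mesh) {
    computeBounds(mesh.get());  // cheap: no copy, no atomic increment
    retainForStreaming(mesh);   // deliberate: exactly one ref-count bump
}
```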

## Lessons on using Core OpenGL

OpenGL Logo. Copyright (C) Khronos Group.

Choosing a specific minimum version of Core OpenGL helps settle all the questions that pop up every time you use anything beyond OpenGL 1.1 and wonder whether you should implement the ARB / EXT variants as well.

Core OpenGL 3.3 makes it easier to stay on the optimal GPU usage path, as you are now required to use VBOs, VAOs, shaders and other modern video card constructs. It has the added advantage that legacy OpenGL calls (which should not be used anyway) simply will not work.

Differences between OpenGL implementations are still pervasive enough that code tested and verified to work on Windows may not work at all on OSX due to differences in the video driver. Moving the codebase to a mobile device will again prove a challenge.

The lesson here is to never assume that your OpenGL code works. Always test on all the platforms you claim to support.

## In Closing

These were some of the most prominent lessons learned from working on Vortex this year. I am looking forward to continuing to work on these projects through 2017!

I think we’ve only scratched the surface of what the new renderer’s architecture can help build and I definitely want to continue developing the renderer to support more immersive experiences, as well as the Editor so it exposes even more features to the user!

Thank you for joining in through the year and, as usual, stay tuned for more!

# Shader Texture Mapping and Interleaved Arrays

Work continues on the new Rendering System for the Vortex Engine.

This week was all about implementing basic texture mapping using GL Core. The following image shows our familiar box, only this time, it’s being textured from a shader.

Texture Mapping via Interleaved Arrays and Shaders in the Vortex Engine.

A number of changes had to go into the Renderer in order to perform texture mapping. These touched almost all the layers of the Engine and Editor.

1. First, I wrote a “Single Texture” shader in GLSL to perform the perspective transform of the Entity’s mesh, interpolate its UV texture coordinates and sample a texture.
2. Second, I had to change the way the retained mesh works in order to be able to send texture coordinate data to the video card (more on this later).
3. Finally, I had to modify the Editor UI to allow selecting which shader is to be used when rendering an Entity.

Regarding how to submit mesh texture coordinate data to the video card: because in OpenGL Core we use Vertex Buffer Objects (VBOs), it was clear that UVs had to be sent to (and retained in) GPU memory too.

There are two ways to achieve this in OpenGL. One consists in creating several VBOs so that each buffer holds one attribute of the vertex. Position data is stored in its own buffer, texture coordinates are stored in their own buffer, per-vertex colors take a third buffer, and so on and so forth.

There’s nothing wrong with doing things this way and it definitely works. There is one consideration to take into account, however: when we scatter our vertex data across several buffers, the GPU has to gather all of it back together at render time, every frame, as part of vertex processing.

I am personally not a big fan of this approach. I prefer interleaving the data myself once and then sending it to the video card in a way that’s already prepared for rendering. OpenGL is pretty flexible in this regard and it lets you interleave all the data you need in any format you may choose.

I’ve chosen to store position data first, then texture coordinate data and, finally, color data (XYZUVRGBA). The retained mesh class will be responsible for tightly packing vertex data into this format.

Once data is copied over to video memory, setting up the vertex attrib pointers can be a little tricky. This is where interleaved arrays become more challenging than separate attribute buffers. An error here will cause the video card to read garbage memory and possibly crash. I’ve seen errors like this bring down entire operating systems, and it’s not a pretty picture.

A sheet of paper will help calculate the byte offsets. It’s important to write down the logic and then manually verify it with pen and paper: the video driver must never read outside the interleaved array.
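
As an illustration, here is a sketch of the XYZUVRGBA layout and its matching attrib pointer setup (attribute locations 0, 1 and 2 are assumptions, not necessarily Vortex’s actual bindings):

```cpp
#include <cstddef>  // offsetof
// (assumes a GL loader header is already included)

// Tightly packed XYZUVRGBA vertex matching the interleaved layout:
struct Vertex {
    float x, y, z;     // position
    float u, v;        // texture coordinates
    float r, g, b, a;  // color
};
static_assert(sizeof(Vertex) == 9 * sizeof(float), "Vertex must be tightly packed");

void setupVertexAttribPointers() {
    const GLsizei stride = sizeof(Vertex);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                          (const GLvoid*)offsetof(Vertex, x));
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                          (const GLvoid*)offsetof(Vertex, u));
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, stride,
                          (const GLvoid*)offsetof(Vertex, r));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);
}
```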

Once ready, OpenGL will take care of feeding the vertex data into our shader inputs, where we will interpolate the texture coordinates and successfully sample the bound texture, as shown in the image above.

Now that we have a working foundation where we can develop custom shaders for any Entity in the scene, it’s time to start cleaning up resource management. Stay tuned for more!

# Basic Entity Instancing and Positioning

This week, I’ve continued working on the Editor, adding Entity instancing through the UI and (simple) position editing.

An animated gif showing very basic entity creation and positioning in the Vortex Editor.

The image above features a rough preview that helps show how any number of Entities can be created now. The next steps will be to refine this to add Mouse-based selection, as well as a more comprehensive UI that allows for finer control over Entity position, rotation and scale.

Once these and a few more Editor features are in, I will create a higher quality video showcasing the creation of an environment by means of Entity editing.

Stay tuned for more!

# Goodbye Hypr3D

It seems the sun has set on the good old Hypr3D app. This was the first commercial app to use a licensed copy of my 3D Engine back in 2011.

Hypr3D’s help page showing the viewer’s recognized gestures. Hypr3D rendering a reconstructed pineapple model on an iPhone.

Back in the day, the Vortex Engine was at version 1.0 and pretty much done by the time the App was developed. Although this made using it almost a “plug and play” experience, there were some interesting problems that had to be solved during the App’s development.

One problem in particular was how to integrate an HTML UI with the Objective-C Front Controller of the App. This was way before any of the modern “hybrid app” toolchains existed, and I remember being on a trip to Seattle when I started laying down the main concepts.

Goodbye Hypr3D, I’ll always remember the countless hours staring at the cookies reference model while debugging!

Hypr3D’s model viewer rendering a reconstructed model featuring a pile of rocks on an iPad

I’ve created a dedicated page in the memory of Hypr3D with a short feature list and some more screenshots. You can visit it here.

Although Hypr3D may not be available anymore, Vortex Engine 1.0 is anything but dead. Stay tuned ;)

# Bump Mapping a Simple Surface

In my last post, I started discussing Bump Mapping and showed a mechanism through which we can generate a normal map from any diffuse texture. At the time, I signed off by mentioning next time I would show you how to apply a bump map on a trivial surface. This is what we are going to do today.

In the video above you can see the results of the effect we are trying to achieve. This video was generated from a GIF file created using the Vortex Engine. It clearly shows the dramatic lighting effect bump mapping achieves.

Although it may seem as if this image shows a detailed mesh of the boss’ head, it is in fact just two triangles. If you could see the image from the side, you’d see it’s completely flat! The illusion of curvature and depth is created by applying per-pixel lighting to a bump mapped surface.

How does bump mapping work? Our algorithm will take as input two images: the diffuse map and the bump map. The diffuse map holds the colors of each pixel in the image, whereas the bump map consists of an encoded set of per-pixel normals that we will use to affect our lighting equation.

Here’s the diffuse map of the Boss door:

Diffuse map of the boss door. Image taken from the Duke3D HRP project.

And here’s the bump map:

Bump map of the boss door. Image taken from the Duke3D HRP project.

I’m using these images, taken from the excellent Duke3D High Resolution Pack (HRP), for educational purposes. Although we could’ve generated the bump map using the technique from my previous post, this specially-tailored bump map will provide better results.

Believe it or not, there are no more input textures used! The final image was produced by applying the technique on these two. This is the reason I think bump mapping is such a game changer. This technique alone can significantly up the quality and realism of the images our renderers produce.

It is especially shocking when we compare the diffuse map with the final bump-mapped image. Even if we applied per-pixel lighting to the diffuse map in our rendering pipeline, the results would be nowhere close to what we can achieve with bump mapping. Bump mapping really makes this door “pop out” of its surface.

## Bump Mapping Theory

The theory I develop in these sections is heavily based on the books Mathematics for 3D Game Programming and Computer Graphics by Eric Lengyel and More OpenGL by Dave Astle. You should check those books for the definitive reference on bump mapping. Here, I try to explain the concepts in simple terms.

So far, you might have noticed I’ve been mentioning that the bump map consists of the “deformed” normals that we should use when applying the lighting equation to the scene. But I haven’t mentioned how these normals are actually introduced into our lighting equations.

Remember from my previous post how we mentioned that normals are stored in the RGB image? Remember that normals close to (0,0,1) looked blueish? Well, that is because normals are stored in a coordinate system local to the image. This means that, unfortunately, we can’t just take each normal N and plug it into our lighting equation. If we call L the vector that takes each 3D point (corresponding to each fragment) to the light source, the problem is that L and N are in different coordinate systems.

L is, of course, in camera or world space, depending on where you like doing your lighting math. But where is N? N is defined in terms of the image. That’s neither of those spaces.

Where is it then? N is actually in its own coordinate system, one that authors refer to as “tangent space”.

In order to apply per-pixel lighting using the normals coming from the bump map, we’ll have to bring all vectors into the same coordinate system. For bump mapping, we usually bring the L vector into tangent space instead of bringing all the normals back into camera/world space: it is more convenient and produces the same results.

Once L has been transformed, we will retrieve N from the bump map and use the Lambert equation between these two to calculate the light intensity at the fragment.

## From World Space to Tangent Space

How can we convert from camera space to tangent space? Tangent space is not by itself defined in terms of anything that we can map to our mesh. So, we will have to use one additional piece of information to determine the relationship between these two spaces.

Given that our meshes are composed of triangles, we will assume the bump map is to be mapped on top of each triangle. The orientation will be given by the direction of the texture coordinates of the vertices that comprise the triangle.

This means that if we have a triangle with an edge (-1,1)(1,1) whose texture coordinates are (0,1)(1,1), the horizontal vector (1,0) is tangent to the vertices and aligned with the horizontal texture coordinate direction. We will call this vector the tangent.

Now, we need two more vectors to define the coordinate system. The second vector we can use is the normal of the triangle. This vector is, by definition, perpendicular to the surface and will also be perpendicular to the tangent.

The final vector we will use to define the coordinate system has to be perpendicular to both the normal and the tangent, so we can calculate it using a cross product. There is an ongoing debate over whether this vector should be called the “bitangent” or the “binormal”. According to Eric Lengyel, the term “binormal” makes no sense from a mathematical standpoint, so we will refer to it as the “bitangent”.

Now that we have the three vectors that define tangent space, we can create a transformation matrix that takes the vector L into the same coordinate system as the normals for that specific triangle. Doing this for every triangle allows applying bump mapping to the whole surface.

## Responsibility – who does what

Although we can compute the bitangent and the transformation matrix in our vertex shader, we will have to supply the tangent vectors as input to our shader program. Tangent vectors need to be calculated on the CPU, but (thankfully) only once. Once we have them, we supply them as an additional vertex array.

Calculating the tangent vectors is trivial for a simple surface like our door, but it can become very tricky for an arbitrary mesh. The book Mathematics for 3D Game Programming and Computer Graphics provides a very convenient algorithm for doing so, one that is widely cited in other books and across the web.
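
For reference, here is a sketch of the per-triangle tangent direction computed by that algorithm (simplified: a complete implementation also averages and orthonormalizes tangents across shared vertices):

```cpp
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Tangent direction for one triangle (p0,p1,p2) with UVs (t0,t1,t2).
// The result should be normalized before being uploaded.
Vec3 triangleTangent(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                     const Vec2& t0, const Vec2& t1, const Vec2& t2) {
    // Edges in object space and in texture space:
    const Vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    const Vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    const float du1 = t1.u - t0.u, dv1 = t1.v - t0.v;
    const float du2 = t2.u - t0.u, dv2 = t2.v - t0.v;

    // Solve for the object-space direction of increasing u:
    const float r = 1.0f / (du1 * dv2 - du2 * dv1);
    return { (dv2 * e1.x - dv1 * e2.x) * r,
             (dv2 * e1.y - dv1 * e2.y) * r,
             (dv2 * e1.z - dv1 * e2.z) * r };
}
```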

For our door, composed of the vertices:

```
(-1.0, -1.0, 0.0)
(1.0, -1.0, 0.0)
(1.0, 1.0, 0.0)
(-1.0, 1.0, 0.0)
```

Tangents will be:

```
(1.0, 0.0, 0.0)
(1.0, 0.0, 0.0)
(1.0, 0.0, 0.0)
(1.0, 0.0, 0.0)
```

Once we have the bitangents and the transformation matrix, we rotate L in the vertex shader, and pass it down to the fragment shader as a varying attribute, interpolating it over the surface of the triangle.
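
Here is a sketch of the vertex shader side, written in the ES 2.0-friendly GLSL dialect this post targets and embedded as a C++ source string (attribute and uniform names are illustrative, and for simplicity the light position is assumed to already be in object space):

```cpp
const char* kBumpVertexSource = R"(
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec3 aTangent;
attribute vec2 aTexCoord;

uniform mat4 uModelViewProjection;
uniform vec3 uLightPosition;  // assumed already in object space

varying vec3 vL;        // light vector, rotated into tangent space
varying vec2 vTexCoord;

void main() {
    vec3 bitangent = cross(aNormal, aTangent);
    mat3 tbn = mat3(aTangent, bitangent, aNormal);  // columns: T, B, N
    vec3 L = uLightPosition - aPosition;
    // For an orthonormal basis, multiplying by the transpose takes L
    // from object space into tangent space; L * tbn == transpose(tbn) * L.
    vL = L * tbn;
    vTexCoord = aTexCoord;
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}
)";
```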

Our fragment shader can then take L, retrieve (and decode) N from the bump map texture, and apply the Lambert equation to both. The rest of the fragment shading algorithm need not change if we are not applying specular highlights.
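
And a matching fragment shader sketch (again, names are illustrative):

```cpp
const char* kBumpFragmentSource = R"(
precision mediump float;

uniform sampler2D uDiffuseMap;
uniform sampler2D uBumpMap;

varying vec3 vL;
varying vec2 vTexCoord;

void main() {
    vec3 L = normalize(vL);
    // Decode the normal from the [0,1] color range back into [-1,1]:
    vec3 N = normalize(texture2D(uBumpMap, vTexCoord).rgb * 2.0 - 1.0);
    float lambert = max(dot(N, L), 0.0);
    gl_FragColor = vec4(lambert * texture2D(uDiffuseMap, vTexCoord).rgb, 1.0);
}
)";
```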

## In Conclusion

Bump mapping is an awesome technique that greatly improves the lighting in our renderers at a limited additional cost. Its implementation is not without challenges, however.

Here are the steps necessary for applying the technique:

- After loading the geometry, compute the per-vertex tangent vectors on the CPU.
- Pass the per-vertex tangents down to the shader as an additional vertex array, along with the normal and other attributes.
- In the vertex shader, compute the bitangent vector.
- Compute the transformation matrix.
- Compute vector L and transform it into tangent space using this matrix.
- Interpolate L over the triangle as part of the rasterization process.
- In the fragment shader, normalize L.
- Retrieve the normal from the bump map by sampling the texture. Decode the RGBA values into a vector.
- Apply the Lambert equation using N and the normalized L.
- Finish shading the triangle as usual.

On the bright side, since this technique doesn’t require any additional shading stages, it can be implemented in both OpenGL and OpenGL ES 2.0 and run on most of today’s mobile devices.

In my next post I will show bump mapping applied to a 3D model. Stay tuned!

# Bump Map Generation

Bump mapping is a texture-based technique that allows improving the lighting model of a 3D renderer. I’m a big fan of bump mapping; I think it’s a great way to really make the graphics of a renderer pop at no additional geometry processing cost.

Bump mapping example. Notice the improved illusion of depth generated by the technique. Image taken from http://3dmodeling4business.com

Much has been written about this technique, as it’s widely used in lots of popular games. The basic idea is to perturb normals used for lighting at the per-pixel level, in order to provide additional shading cues to the eye.

The beauty of this technique is that it doesn’t require any additional geometry for the model, just a new texture map containing the perturbed normals.

This post covers the topic of bump map generation, taking as input nothing but a diffuse texture. It is based on the techniques described in the books “More OpenGL” by Dave Astle and “Mathematics for 3D Game Programming and Computer Graphics” by Eric Lengyel.

Let’s get started! Here’s the Imp texture that I normally use in my examples. You might remember the Imp from my Shadow Mapping on iPad post.

Diffuse texture map of the Imp model.

The idea is to generate the bump map from this texture. In order to do this, what we are going to do is analyze the diffuse map as if it were a heightmap that describes a surface. Under this assumption, the bump map will be composed of the surface normals at each point (pixel).

So, the question is, how do we obtain a heightmap from the diffuse texture? We will cheat. We will convert the image to grayscale and hope for the best. At least this way we will be taking into account the contribution of each color channel for each pixel we process.

Let’s call H the heightmap and D the diffuse map. Converting an image to grayscale can easily be done programmatically using the following equation:

$\forall (i,j) \in [0..width(D)) \times [0..height(D)): H_{i,j} = 0.30 \cdot red(D_{i,j}) + 0.59 \cdot green(D_{i,j}) + 0.11 \cdot blue(D_{i,j})$

As we apply this formula to every pixel, we obtain a grayscale image (our heightmap), shown in the next figure:

A grayscale conversion of the Imp diffuse texture.

Now that we have our heightmap, we will study how the grayscale values vary in the horizontal $s$ and vertical $t$ directions. This is a very rough approximation of the surface derivative at each point and will allow approximating the normal later.

If $H_{i,j}$ is the grayscale value stored in the heightmap at the point $(i,j)$, then we approximate the derivatives $s$ and $t$ like so:

$s_{i,j} = (1, 0, H_{i+1,j}-H_{i-1,j}) \\ t_{i,j} = (0, 1, H_{i, j+1}-H_{i,j-1})$

$s$ and $t$ are two vectors tangent to the heightmap surface at point $(i,j)$. What we can now do is take their cross product to find a vector perpendicular to both. This vector is the normal of the surface at point $(i,j)$ and is, therefore, the vector we were looking for. We will store it in the bump map texture.

$N = \frac{s \times t}{||s \times t||}$

After applying this logic to the entire heightmap, we obtain our bump map.
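
In code, the per-pixel logic can be sketched as follows (shown here in C++, although the actual Vortex tool is a Python script; border clamping is omitted for brevity):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// H is a width*height grayscale heightmap with values in [0,1].
// Border pixels would need clamping; omitted here for brevity.
Vec3 normalAt(const float* H, int width, int i, int j) {
    // s = (1, 0, H[i+1,j] - H[i-1,j]),  t = (0, 1, H[i,j+1] - H[i,j-1])
    const float dx = H[j * width + (i + 1)] - H[j * width + (i - 1)];
    const float dy = H[(j + 1) * width + i] - H[(j - 1) * width + i];
    // Cross product s x t = (-dx, -dy, 1), then normalize:
    const float len = std::sqrt(dx * dx + dy * dy + 1.0f);
    return { -dx / len, -dy / len, 1.0f / len };
}
```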

We must be careful when storing a normalized vector in a texture. Vector components will be in the [-1,1] range, but the values we can store in the bitmap need to be in the [0,255] range, so we will have to map between the two ranges to store our data as color.
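
Concretely, the linear conversion maps each normal component $n \in [-1,1]$ to a color channel value:

$C = round(255 \cdot \frac{n + 1}{2})$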

This conversion produces an image like the following:

Bump map generated from the Imp’s diffuse map, ready to be fed into the video card.

Notice the prominence of blue, which represents normals close to the (unperturbed) $(0,0,1)$ vector. Vertical normals end up being stored as blueish colors after the linear conversion.

We are a bit more interested in the darker areas, however. This is where the normals are more perturbed and will make the Phong equation subtly affect shading, expressing “discontinuities” in the surface that the eye will interpret as “wrinkles”.

Other colors will end up looking like slopes and/or curves.

In all fairness, the image is a bit more grainy than I would’ve liked. We can apply a bilinear filter on it to make it smoother. We could also apply a scale to the $s$ and $t$ vectors to control how steep calculated normals will be.

However, since we are going to be interpolating rotated vectors during the rasterization process, these images will be good enough for now.

I’ve written a short Python script that implements this logic and applies it on any diffuse map. It is now part of the Vortex Engine toolset.

In my next post I’m going to discuss how to implement the vertex and fragment shaders necessary to apply bump mapping on a trivial surface. Stay tuned!