Binding C++ to Lua

For the last couple of weeks, a lot of work has been going into developing the scripting API that the engine exposes to Lua. Embedding a scripting language into a large C++ codebase has been a very interesting exercise, and I’ve been able to see first hand why Lua is regarded as such a strong scripting language.

Introspection of an Entity from a Lua script running in the Console.

Lua offers a myriad of ways to develop a scripting interface for our native code.

A naïve approach would be to expose every function of the engine in the global namespace and have scripts use these directly. Although this method would certainly work, we want to offer an object-oriented API to the engine and its different components, so a more elaborate solution is required.

I ultimately decided to build the interface from scratch, following the Lua concepts of tables and metatables. The reason is that building everything myself allows me to clearly see the cost of the binding as objects are passed back and forth, which will help keep an eye on performance.

Initial Test of the Lua-C++ binding. In this example, we query and rename an Entity.

In order to keep the global namespace as clean as possible, the idea was to procedurally create a Lua table from the C++ side where all functions and types would live. Conceptually, this table is our namespace, so I named it vtx accordingly. It’s really the only global variable that the engine registers.
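As a rough sketch, registering such a namespace table through the Lua C API could look like this (the function names here are stand-ins for the engine’s actual registration code, which isn’t shown in this post):

#include <lua.hpp>

// vtx_instantiate is the engine-provided C function behind vtx.instantiate;
// its implementation lives elsewhere in the engine.
static int vtx_instantiate(lua_State* L);

static void register_vtx_namespace(lua_State* L) {
    lua_newtable(L);                    // the table that becomes our namespace
    lua_pushcfunction(L, vtx_instantiate);
    lua_setfield(L, -2, "instantiate"); // vtx.instantiate = <C function>
    lua_setglobal(L, "vtx");            // the only global the engine registers
}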

The next step was to start populating the vtx namespace. Two functions I knew I wanted to expose right away were instantiate and find_first_entity_by_name:

vtx.instantiate( string )
-- Create a new entity corresponding to the identifier passed. 
-- Return the new entity created.

vtx.find_first_entity_by_name( string )
-- Find the first entity in the scene that matches the name specified.
-- Return the entity or nil if no matches are found.

We now have functions. But how do we do objects? And how do we expose our vtx::Entity objects to Lua?

Let’s recap a bit. We know that entities are “engine objects”, in the sense that they live in C++ and their lifecycles are managed by the Vortex Engine. What we want is to provide a lightweight object that Lua can interact with, but when push comes to shove, the native side will be able to leverage the full C++ interface of the engine.

Lua offers the concept of a metatable to help achieve this. Metatables can be associated with any table to provide special semantics to it. One special semantic we are interested in is the __index property, which allows implementing the Prototype design pattern.

I won’t go into the details of how the Prototype design pattern works, but suffice it to say that whenever a function is called on a table and the table does not have an implementation for it, the prototype will be responsible for servicing it.

This is exactly what we want. What we can do then is wrap our vtx::Entity instances in Lua tables and provide a common metatable to all of them that we implement in the C++ side. Even better, because of this approach Lua will take care of passing the Entity Table we are operating on as the first parameter to every function call. We can use this as the “this” object for the method.

Putting it all together, let’s walk through how entities expose the vtx::Entity::setName() function to Lua (a condensed code sketch follows the list):

  1. From the native side, we create a metatable. Call it vtx.Entity.
  2. We register in this metatable a C++ function that receives a table and a string and can set the name of a native Entity. We assign it to the “set_name” property of the metatable.
  3. Whenever a script requests an Entity (instantiate, find), the function servicing the call will:
    1. Create a new table.
    2. Set the table’s metatable to vtx.Entity.
    3. Store a pointer to the C++ Entity in it.
  4. When a script invokes the Entity’s set_name function, it will trigger a lookup into the metatable’s functions.
  5. The function we registered under set_name will be called. We are now back in C++.
  6. The native function will pop from the stack a string (the new name) and the “Entity” on which the method was called.
  7. We reinterpret_cast the Entity Table’s stored pointer as a vtx::Entity pointer and call our normal setName() function, passing down the string.
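Here is that flow condensed into a sketch using the plain Lua C API. The vtx.Entity metatable and set_name come straight from the steps above; the __ptr field name, the helper names and the light-userdata choice are my assumptions for illustration:

#include <lua.hpp>
#include <string>

namespace vtx {
class Entity {
public:
    void setName(const std::string& name) { name_ = name; }
private:
    std::string name_;
};
}

// Steps 5-7: the C++ function registered under "set_name".
static int entity_set_name(lua_State* L) {
    luaL_checktype(L, 1, LUA_TTABLE);          // the Entity table acting as "this"
    const char* name = luaL_checkstring(L, 2); // the new name

    lua_getfield(L, 1, "__ptr");               // the stored native pointer
    auto* entity = static_cast<vtx::Entity*>(lua_touserdata(L, -1));
    lua_pop(L, 1);

    entity->setName(name);                     // back in regular C++
    return 0;                                  // no values returned to Lua
}

// Steps 1-2: create the vtx.Entity metatable and register its methods.
static void register_entity_metatable(lua_State* L) {
    luaL_newmetatable(L, "vtx.Entity");
    lua_pushvalue(L, -1);
    lua_setfield(L, -2, "__index");            // failed lookups fall through here
    lua_pushcfunction(L, entity_set_name);
    lua_setfield(L, -2, "set_name");
    lua_pop(L, 1);
}

// Step 3: wrap a native entity in a fresh table sharing the common metatable.
static void push_entity(lua_State* L, vtx::Entity* entity) {
    lua_newtable(L);
    lua_pushlightuserdata(L, entity);
    lua_setfield(L, -2, "__ptr");
    luaL_getmetatable(L, "vtx.Entity");
    lua_setmetatable(L, -2);
}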

Et voilà. That is everything. The second image above shows in the console log how all this looks to a Lua script. At no point does the script developer need to know that the logic flow is jumping between Lua and C++ as her program executes.

We can also see in the screenshot how the Editor’s entity list picks up the name change. This shows how we are actually altering the real engine objects and not some mock Lua clone.

As I mentioned in the beginning of this post, developing a Lua binding for a large C++ codebase from scratch is a lot of fun. I will continue adding more functionality over the coming weeks and then we’re going to be ready to go back and revisit scene serialization.

Stay tuned for more!

Building the Engine Scripting API

Last week when we left off, we were able to implement a Lua REPL in the Vortex Editor console. This week, I wanted to take things further by allowing Lua scripts to access the engine’s state, create new entities and modify their properties.

Scripting Interface to the Engine. A C++ cube entity is instantiated from Lua code that is evaluated on the fly in the console.

In order to get started, I added a single, simple function: vtx_instantiate(). This function is available to Lua, but its actual implementation is provided in native code, in C++. The image above shows how we can use this function to add an entity to the scene from the console.

This simple example allows us to test two important concepts: first, that we can effectively call into C++ from Lua. Second, it shows that we are able to pass in parameters between the two languages. In this case, the single argument expected is a string that specifies which primitive or asset to instantiate.
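For reference, the native side of such a function could look roughly like this (a sketch; get_engine() and instantiate() are hypothetical stand-ins, not the actual Vortex calls):

#include <lua.hpp>

static int vtx_instantiate(lua_State* L) {
    // The single expected argument: which primitive or asset to instantiate.
    const char* asset = luaL_checkstring(L, 1);
    get_engine()->instantiate(asset); // hypothetical engine entry point
    return 0;                         // no values returned to Lua
}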

With this in place, we can now move on to building a more intricate API that enables controlling any aspect of the scene, responding to user input and even implementing an elaborate world simulation.

Best of all, because the Lua VM is embedded into the engine, scripts built against the Vortex API will by definition be portable and run on any platform the engine runs on. This includes, of course, mobile devices.

The idea now is to continue to expand the engine API, developing a rich, easy to use set of functions. API design should prove an interesting exercise. Stay tuned for more!

Adding a Scripting Engine to Vortex

This week I added scripting support to the Engine. I chose to go with Lua because of how easy it is to integrate into existing C/C++ codebases.

Initial integration of a Lua VM in the form of an updated Console

I’ve mentioned Lua several times before in this blog, but if you’re not familiar with it, it’s a great open source programming language developed at the Pontifical Catholic University of Rio de Janeiro in Brazil (PUC-Rio). It’s very easy to pick up.

Here’s a 10,000-foot view of the language, courtesy of coffeeghost:

Lua cheatsheet by coffeeghost.

I’ve been interested in adding Lua scripting to the engine for a while now. I finally decided to take the step while I was revisiting serialization and a friend suggested going directly with Lua for the manifest file instead of JSON.

Moving from a “declarative” manifest to an “imperative” one might seem strange; however, it will give me the opportunity to start fleshing out the Lua-to-Engine interface that will later serve engine-wide scripting.

I am very happy with the way things turned out. In the image above you can see how I refactored the Vortex Editor console to now support a full Lua REPL.

Powered by the Lua Engine in Vortex, the console is no longer a place where the engine just prints messages, but rather a true editor shell with a direct interface to the engine. This is similar to what some popular 3D modeling software products do with Python.
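At its core, wiring a console line to an embedded Lua VM takes surprisingly little code. A minimal sketch (assuming Lua 5.2 or later; the function names are mine, not the editor’s actual code):

#include <lua.hpp>
#include <string>

static lua_State* g_lua = nullptr;

void console_init() {
    g_lua = luaL_newstate();
    luaL_openlibs(g_lua); // make the standard Lua libraries available
}

// Evaluate one line typed into the console; returns an error message on failure.
std::string console_eval(const std::string& line) {
    if (luaL_dostring(g_lua, line.c_str()) != LUA_OK) {
        std::string error = lua_tostring(g_lua, -1);
        lua_pop(g_lua, 1); // remove the error message from the stack
        return error;
    }
    return "";
}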

I am excited about having Lua scripts as first class citizens in the engine. Expect to see much more Lua in this blog in the upcoming months!

Stay tuned for more!

Deferred Rendering – Part II: Handling Normals

This week work went into adding normal data to the G-Buffer and Light passes. I left normal data out while I was still implementing the architecture for the deferred renderer, so in the previous post we were only doing ambient lighting.

The Stanford Dragon 3D model rendered with a single directional light by the new deferred renderer.

Adding normal data required a few tweaks to several components of the renderer.

First, I had to extend my Vertex Buffers to accommodate three extra floating-point values (the normal data). If you’ve been following this post series, you’ll remember from this post that we designed our Interleaved Arrays with the format XYZUVRGBA. I’ve now expanded this format to include per-vertex normals. The new packing is XYZNNNUVRGBA, where NNN denotes the normal x, y and z values.
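In OpenGL terms, the interleaved layout maps to attribute pointers along these lines (a sketch; the attribute indices are assumptions, and the stride is 12 floats per vertex):

const GLsizei stride = 12 * sizeof(float); // XYZ NNN UV RGBA

glEnableVertexAttribArray(0); // position: x, y, z
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);

glEnableVertexAttribArray(1); // normal: nx, ny, nz
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float)));

glEnableVertexAttribArray(2); // texture coordinates: u, v
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float)));

glEnableVertexAttribArray(3); // color: r, g, b, a
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, stride, (void*)(8 * sizeof(float)));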

Next, the G-Buffer had to be extended to include a third render target: a normal texture. This is where we store the interpolated normal data calculated during the Geometry pass.

Finally, we extend the Light pass shader to take in the normal texture, sample the per-fragment world-space normal and use it in its lighting calculation.

The test box rendered with a hard directional light with direction (-1,-1,0).

For testing, I used our usual textured box model and a directional light with a rotating direction, as depicted above. The box is great for these kinds of tests, as its faces are parallel to the coordinate planes and opposite faces have opposite normals.

Soon enough, the test revealed a problem: faces with negative normals were not being lit. Drawing the normal data to the framebuffer during the light pass helped narrow down the problem tremendously. It turned out that normal data sampled from the normal texture was clamped to the [0, 1] range. This meant that we could not represent normals with negative components.

Going through the OpenGL documentation for floating-point textures revealed the culprit. According to the docs, if the data type of a texture ends up being defined as fixed-point, sampled data will be clamped. The problem was caused by a single incorrect parameter in the glTexImage2D calls used to create the render textures: the data type was being set to GL_UNSIGNED_BYTE for floating-point textures.

The fix was simple enough: set the data type to GL_FLOAT for floating-point textures, even if the internal format is already floating-point.
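In code, the difference boils down to a single parameter (using GL_RGB16F here as an example internal format):

// Before: fixed-point data type; sampled values get clamped to [0, 1].
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

// After: floating-point data type; negative normal components survive sampling.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, nullptr);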

Our old forktruck scene, loaded into the Editor and rendered using the first iteration of the deferred renderer.

With this fix in place, I could confirm that we are able to light a surface from any direction with our simple Light pass. The image above shows our old warehouse scene loaded into the Editor, with the forktruck being lit by the deferred renderer!

There are now several fronts on which work can continue: we can extend our serialization format to include material data, we can continue improving the visuals, or we can start testing our code on Mac. Stay tuned for more!

Deferred Rendering – Part I

This week, work went into completing the initial implementation of a Deferred Renderer for Vortex. Every feature we had been incorporating into the Engine so far was building up to this moment, and so, this time around, I finished writing all the components necessary to render geometry to the G-Buffer and then do a simple light pass.

Initial implementation of deferred rendering in the Vortex V3 Engine.

The image above shows the current functionality. Here, a box is created and its material is set to the “Geometry Pass” built-in shader. Other engines usually refer to this shader with another name, but ultimately, the purpose is always the same: populate the G-Buffer with data.

Once we’ve assigned the shader to the material, we attach a diffuse texture to it. This will be the object’s albedo value, which we write into the G-Buffer together with the corresponding position and normal data. Multi-Render Target support in Vortex is fundamental here, as it lets us fill the G-Buffer efficiently in a single render pass.

Next, we create a light entity. We covered Vortex’s new light interface back in February. Light entities are gathered by the V3 renderer and, depending on their type, drawn directly into the framebuffer. The light shader is selected based on the light type, and it accesses the readily available G-Buffer data to shade the scene.

The postprocessing underpinnings in the new renderer turned out to be essential for testing new ideas and debugging the implementation every step of the way.

I’m quite happy with the results. Now that we have a complete vertical slice of the deferred renderer, we can iteratively add new features on top of it to expand its capabilities. Off the top of my head, support for more light types and transparent meshes are two major features that I want to tackle in the upcoming weeks.

G-Buffer

This week I started working on the G-Buffer implementation for the deferred renderer.

A test of the G-Buffer. Colors correspond to the coordinates of the vertices in world space.

The G-Buffer pass is the first half of any deferred shading algorithm. The idea is that, instead of drawing shaded pixels directly to the screen, we store geometric information for all our opaque objects in a “Geometry Buffer” (G-Buffer for short). This information is then used in a later lighting pass.

In sharp contrast to forward rendering, where the output of our rendering is the already-lit pixels, lighting calculations here are deferred to a later pass, hence the term deferred rendering.

The image above shows a simple test of the G-Buffer implementation in Vortex. In this test, we draw a number of boxes, with each vertex colored according to its position in world space as read from the G-Buffer.

Notice how as vertices move left to right, we gain red (x translates to red). Similarly, as vertices move from bottom to top, we gain green (y translates to green). We cannot see it in this picture, but as vertices move from the back to the front, we also gain blue (z translates to blue).

Through this test we can also verify that moving the camera does not change the colors at all. This helps validate that our position data are indeed in world space.
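The debug mapping itself is tiny. A fragment shader along these lines reproduces it (GLSL embedded as a C++ string; the variable names are assumptions):

const char* position_debug_fs = R"(
#version 330 core
in vec3 v_world_position; // interpolated world-space position from the vertex shader
out vec4 out_color;

void main() {
    // x maps to red, y to green, z to blue, as described above.
    out_color = vec4(v_world_position, 1.0);
}
)";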

This is going to be all for today. Next week, work will continue in the G-Buffer implementation. Stay tuned for more!

Multi-Render Targets

This week, work went into testing Multi-Render Target support. Usually abbreviated MRT, Multi-Render Targets allow our shaders’ output to be written to more than one texture in a single render pass.

A test of Multi-Render Targets in Vortex 3. Texels from the left half of the box come from a texture, whereas texels from the right half come from another.

MRT is the foundation of every modern renderer, as it allows building complex visuals without requiring several passes over the scene. With MRT, we can simply specify all the textures that a render pass will be writing to and then, from the shader code, write to each out variable.

MRT is standard in both Core OpenGL and OpenGL ES 3.0, so opting into it will not preclude the renderer from running on mobile hardware. Only a couple of minor adjustments had to be made to our current shaders in order to support MRT.

In particular, OpenGL 3.3 changed the way fragment shaders write to multiple attachments. Prior to OpenGL 3.3, we would write to the GLSL built-in variable gl_FragData[i] to specify the output we were writing to. Starting in OpenGL 3.3, we explicitly specify the layout of our out variables in the fragment shader using the seemingly weird syntax layout(location = i) out vec3 attachment_i; and then write to each variable directly.

In order to achieve this, we had to increase the GLSL version to 330. We could have stayed at version 150 and used a GLSL extension, but we are trying to stick to standard, out-of-the-box OpenGL as much as possible, so that was ruled out.
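On the C++ side, the matching setup attaches one texture per shader output and enables both draw buffers (a sketch; the framebuffer and texture ids are assumed to have been created already):

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, texture1, 0);

// Map layout(location = 0) and layout(location = 1) to the two attachments.
const GLenum draw_buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, draw_buffers);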

A test of the Multi-Render Target functionality in Core OpenGL 3.3

In order to test MRT, I designed a simple test whose output can be seen above. In these images, the shader used to render the box writes the red and green channels to two different texture attachments.

The blit pass that draws the framebuffer to the screen samples both textures, using pixels from the red texture for the left half of the screen and pixels from the green texture for the right half. This generates the visual effect of the box being painted in two colors.

I’m very happy with the results of this test. MRT is a very approachable feature and there is no reason not to use it if you are targeting recent hardware.

The next steps will be to clean up the internal Framebuffer API even more to make MRT support more flexible, and to start working on implementing the G-Buffer. As usual, stay tuned for more!

Quick Procedural Geometry Notes

This past week was crazy busy and I didn’t have any time to sit down and code. Nonetheless, I managed to work out the math and jot down a quick piece of pseudocode for procedurally generating a cone.

Math and Pseudocode for generating a cone procedurally.

Here, I’ve chosen to place the cone’s apex at (0,0,0) in Object Space. This will allow me to always rotate the cone about its apex, something that will be extremely helpful for the effect we’re trying to achieve.
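Since the pseudocode itself only exists on paper for now, here’s one possible shape of the idea in C++ (my own sketch under the apex-at-origin assumption; not the final vtx::procedural code):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Triangle list for a cone with its apex at the origin and its base circle
// at y = -height. The base cap is omitted for brevity.
std::vector<Vec3> make_cone(float radius, float height, int segments) {
    std::vector<Vec3> vertices;
    const float step = 2.0f * 3.14159265f / segments;
    for (int i = 0; i < segments; ++i) {
        const float a0 = i * step;
        const float a1 = (i + 1) * step;
        vertices.push_back({ 0.0f, 0.0f, 0.0f }); // apex
        vertices.push_back({ radius * std::cos(a0), -height, radius * std::sin(a0) });
        vertices.push_back({ radius * std::cos(a1), -height, radius * std::sin(a1) });
    }
    return vertices;
}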

I haven’t had the time to test this idea in code yet, however, the plan is to build it into Vortex as part of the vtx::procedural package.

I’m not giving away any more details this week! You’ll have to stay tuned for more!

New Light Interface

This week, work focused on developing a completely new interface for placing and customizing lights in the scene.

New Light Component Panel in the Vortex Editor.

The history of Vortex with lights is interesting. In Vortex 1.1, the light system would leverage the fixed pipeline functionality. This meant that any single object in the scene could be lit by up to 8 different lights simultaneously.

In Vortex 2.0, the entire light system was replaced by programmable shaders. This meant that a user could define as many lights as she wanted, as long as she created a custom “Visual Effect” that implemented the lighting rig. This was very flexible, but it shifted the burden of lighting to the application.

For V3, we are changing the approach again to be able to support multiple dynamic light sources while, at the same time, moving most of the work back into the engine. The plan is to effectively shield the application from implementing the lighting logic and let it focus on just light placement and configuration.

In V3, lights are components that can be attached to entities. Being part of entities means that lights will be able to move around just like any other object in the scene. Being custom components allows exposing a rich, declarative interface for configuring the appearance of each light and how it affects the world.
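To give a feel for what that declarative interface could look like in application code, here’s a purely hypothetical sketch (none of these names are final API):

// Hypothetical API sketch, not actual Vortex code.
vtx::Entity* sun = scene->createEntity("sun");
auto* light = sun->addComponent<vtx::LightComponent>();
light->setType(vtx::LightType::Directional);
light->setDirection({ -1.0f, -1.0f, 0.0f });
light->setColor({ 1.0f, 1.0f, 1.0f });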

Under the hood, the new renderer will take care of processing all the lights in a consistent fashion, ensuring that lighting throughout the scene is uniform.

I think we are about to reach the most interesting parts of the new renderer. This is where V3 will really set itself apart from previous iterations of the engine. Stay tuned for more!

Postprocessing Underpinnings

These past few weeks the majority of work went into establishing the underpinnings for frame postprocessing in the new V3 renderer.

A postprocessing effect that renders the framebuffer contents in grayscale.

Vortex 2.0 was the first version of the engine to introduce support for custom shaders and, although this opened the door to implement postprocessing effects, the API was cumbersome to use this way.

In general, the process boiled down to having two separate scene graphs and manually controlling the render-to-texture process. This spilled many engine details into user programs, was prone to breaking whenever the engine changed, and meant engine users had to write hundreds of lines of code.

With V3, I want to make render-to-texture the default render mode. This means the engine will never render directly into the default framebuffer; rather, we always render to an FBO that we can then postprocess.

Architecting the renderer this way provides the opportunity to implement a myriad of effects that will up the visuals significantly while also keeping the nitty gritty details hidden under the hood.

The image above shows a postprocessed scene where the framebuffer contents were desaturated while being blit onto the screen, producing a grayscale image.
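For reference, a desaturation pass of this kind can be expressed in a few lines of fragment shader (GLSL embedded as a C++ string; the uniform and varying names are assumptions):

const char* grayscale_fs = R"(
#version 330 core
uniform sampler2D u_scene; // the framebuffer contents, rendered to texture
in vec2 v_uv;
out vec4 out_color;

void main() {
    vec3 color = texture(u_scene, v_uv).rgb;
    float gray = dot(color, vec3(0.299, 0.587, 0.114)); // Rec. 601 luma weights
    out_color = vec4(vec3(gray), 1.0);
}
)";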

For the upcoming weeks, work will focus on building upon this functionality to develop the components necessary to support more advanced rendering in V3. Stay tuned for more!