Component Introspection

This week, work on the scripting interface continued. As the image below shows, I can now access an entity’s components and even drill down to its Material through the Lua console.

Introspecting a Render Component to access its material via the Lua interface.

The image above shows an example of the scripting interface for entities and components. Here, we create a new Entity from the built-in Box primitive and then find its first component of type “1”. Type 1 is an alias for the Entity’s Render Component, which is responsible for binding a Mesh and a Material together.

Once we have the Render Component, we use it to access its Material property and print its memory address.

As the image shows, although Lua allows for very flexible duck typing, I am performing type checking behind the scenes to make scripting mistakes obvious to the user. Allow me to elaborate:

For the interface, I’ve decided that all components will be hidden behind the vtx.Component “class”. This class is responsible for exposing the interface to all native component methods, such as get_material(), set_mesh(), get_transform() and so forth.

The problem is: how do we prevent a script from accessing the material property of a Component that doesn’t have one, such as the Md2AnimationComponent? In my mind, there are two ways to handle this. I’m going to call them the “JavaScript” way and the “Python” way.

In the JavaScript way, we don’t really care. We allow calling any method on any component and silently fail when a mismatch is detected. We may return nil or “undefined”, but at no point do we raise an error.

In the Python way, we perform a sanity check before invoking the function on the component, and halt the operation when a mismatch is found. You can see this in the example above: here, we purposefully attempt to get the material of a Base Component (component type 0), which doesn’t have one. The Engine detects the inconsistency and raises an error.
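
Behind the scenes, the check can be as simple as comparing the component’s type tag before dispatching the call. Below is a minimal sketch of the idea using the Lua C API; the type constants, the "_ptr" field and the class shapes are assumptions made for illustration, not the engine’s actual code:

#include <lua.hpp>

namespace vtx {

enum class ComponentType { Base = 0, Render = 1 };

class Material;

class Component {
public:
    virtual ~Component() = default;
    virtual ComponentType type() const = 0;
    virtual Material* getMaterial() const { return nullptr; }
};

} // namespace vtx

// Lua-facing get_material(). Assumes the native pointer is stored in
// the "_ptr" field of the component table the method was invoked on.
static int l_component_get_material(lua_State* L)
{
    lua_getfield(L, 1, "_ptr");
    auto* component =
        reinterpret_cast<vtx::Component*>(lua_touserdata(L, -1));

    // The "Python way": refuse the call instead of silently failing.
    if (component->type() != vtx::ComponentType::Render) {
        return luaL_error(L, "this component type has no material");
    }

    lua_pushlightuserdata(L, component->getMaterial());
    return 1;
}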

I feel the Python way is the way to go: it prevents the subtle, hard-to-debug errors that arise from allowing any method to be called on any component and happily carrying on in the hope of eventually reaching some sort of result.

A third alternative would have been to expose a separate “class” for every component type. This would certainly work, but I’m concerned about a potential “class explosion” as we continue to add more and more components to the Engine. Furthermore, I feel duck typing backed by strict runtime checks is a good approach for a language like Lua, one well in tune with the language’s philosophy.

Now that we can drill all the way down to an Entity’s material, it’s time to expand the interface to allow setting the shader and the material properties, allowing the script developer to control how entities are rendered by the Engine. Stay tuned for more!

Binding C++ to Lua

For the last couple of weeks, a lot of work has been going into developing the scripting API that the engine exposes to Lua. Embedding a scripting language into a large C++ codebase has been a very interesting experience, and I’ve seen first-hand why Lua is regarded as such a strong scripting language.

Introspection of an Entity from a Lua script running in the Console.

Lua offers a myriad of ways we can develop a scripting interface for our native code.

A naïve approach would be to expose every function of the engine in the global namespace and have scripts use these directly. Although this method would certainly work, we want to offer an object-oriented API to the engine and its different components, so a more elaborate solution is required.

I ultimately decided to build the interface from scratch, following the Lua concepts of tables and metatables. Building everything myself lets me clearly see the cost of the binding as objects are passed back and forth, which will help keep an eye on performance.

Initial Test of the Lua-C++ binding. In this example, we query and rename an Entity.

In order to keep the global namespace as clean as possible, the idea was to create a Lua Table procedurally from the C++ side where all functions and types would live. Conceptually, this table is our namespace, so I named it vtx, accordingly. It’s really the only global variable that the engine registers.

The next step was to start populating the vtx namespace. Two functions I knew I wanted to expose right away were Instantiate and Find:

vtx.instantiate( string )
-- Create a new entity corresponding to the identifier passed. 
-- Return the new entity created.

vtx.find_first_entity_by_name( string )
-- Find the first entity in the scene that matches the name specified.
-- Return the entity or nil if no matches are found.
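
For reference, here is a minimal sketch of how such a namespace table can be registered procedurally through the Lua C API. The C function names are placeholders for illustration; only the vtx table name and the two API names come from the engine:

#include <lua.hpp>

// Hypothetical native implementations of the two calls above.
static int l_instantiate(lua_State* L) { /* push a wrapped entity */ return 1; }
static int l_find_first_entity_by_name(lua_State* L) { /* push entity or nil */ return 1; }

void registerVtxNamespace(lua_State* L)
{
    lua_newtable(L);                     // the vtx "namespace" table

    lua_pushcfunction(L, l_instantiate);
    lua_setfield(L, -2, "instantiate");  // vtx.instantiate

    lua_pushcfunction(L, l_find_first_entity_by_name);
    lua_setfield(L, -2, "find_first_entity_by_name");

    lua_setglobal(L, "vtx");             // the only global we register
}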

We now have functions. But how do we do objects? And how do we expose our vtx::Entity objects to Lua?

Let’s recap a bit. We know that entities are “engine objects”, in the sense that they live in C++ and their lifecycles are managed by the Vortex Engine. What we want is to provide a lightweight object that Lua can interact with, but when push comes to shove, the native side will be able to leverage the full C++ interface of the engine.

Lua offers the concept of a metatable to help achieve this. A metatable can be associated with any table to give it special semantics. One semantic we are particularly interested in is the __index property, which allows implementing the Prototype design pattern.

I won’t go into the details of how the Prototype design pattern works, but suffice it to say that whenever a function is called on a table and the table does not have an implementation for it, the prototype will be responsible for servicing the call.

This is exactly what we want. What we can do, then, is wrap our vtx::Entity instances in Lua tables and provide a common metatable, implemented on the C++ side, to all of them. Even better, with this approach Lua takes care of passing the Entity table we are operating on as the first argument to every function call. We can use this as the “this” object for the method.

Putting it all together, let’s walk through how entities expose the vtx::Entity::setName() function to Lua (a code sketch follows the list):

  1. From the native side, we create a metatable. Call it vtx.Entity.
  2. We register in this metatable a C++ function that receives a table and a string and can set the name of a native Entity. We assign it to the “set_name” property of the metatable.
  3. Whenever a script requests an Entity (instantiate, find), the function servicing the call will:
    1. Create a new table.
    2. Set the table’s metatable to vtx.Entity.
    3. Store a pointer to the C++ Entity in it.
  4. When a script invokes the Entity’s set_name function, it will trigger a lookup into the metatable’s functions.
  5. The function we registered under set_name will be called. We are now back in C++.
  6. The native function will pop from the stack a string (the new name) and the “Entity” on which the method was called.
  7. We reinterpret_cast the Entity Table’s stored pointer as a vtx::Entity pointer and call our normal setName() function, passing down the string.
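
Here is a minimal sketch of what steps 1 through 3 and 5 through 7 can look like through the Lua C API. The "_ptr" field name and the helper names are assumptions made for illustration; the actual engine code may differ:

#include <lua.hpp>

#include "Entity.h" // hypothetical header declaring vtx::Entity

// Steps 5-7: the C++ function registered under set_name.
static int l_entity_set_name(lua_State* L)
{
    const char* name = luaL_checkstring(L, 2); // the new name
    lua_getfield(L, 1, "_ptr");                // the wrapped pointer
    auto* entity = reinterpret_cast<vtx::Entity*>(lua_touserdata(L, -1));
    entity->setName(name);
    return 0;
}

// Steps 1-2: build the vtx.Entity metatable once at startup.
void registerEntityMetatable(lua_State* L)
{
    luaL_newmetatable(L, "vtx.Entity");
    lua_pushvalue(L, -1);
    lua_setfield(L, -2, "__index");  // the metatable is its own prototype
    lua_pushcfunction(L, l_entity_set_name);
    lua_setfield(L, -2, "set_name");
    lua_pop(L, 1);
}

// Step 3: wrap a native Entity in a fresh table for Lua.
void pushEntityTable(lua_State* L, vtx::Entity* entity)
{
    lua_newtable(L);
    luaL_getmetatable(L, "vtx.Entity");
    lua_setmetatable(L, -2);
    lua_pushlightuserdata(L, entity);
    lua_setfield(L, -2, "_ptr");
}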

Et voilà, that is everything. The second image above shows in the console log how all this looks to a Lua script. At no point does the script developer need to know that the logic flow is jumping between Lua and C++ as her program executes.

We can also see in the screenshot how the Editor’s entity list picks up the name change. This shows how we are actually altering the real engine objects and not some mock Lua clone.

As I mentioned in the beginning of this post, developing a Lua binding for a large C++ codebase from scratch is a lot of fun. I will continue adding more functionality over the coming weeks and then we’re going to be ready to go back and revisit scene serialization.

Stay tuned for more!

Deferred Rendering – Part I

This week, work went into completing the initial implementation of a Deferred Renderer for Vortex. Every feature we had been incorporating into the Engine so far was building up to this moment, and this time around I finished writing all the components necessary to render geometry into the G-Buffer and then perform a simple light pass.

Initial implementation of deferred rendering in the Vortex V3 Engine.

The image above shows the current functionality. Here, a box is created and its material is set to the “Geometry Pass” built-in shader. Other engines usually refer to this shader by another name, but ultimately, the purpose is always the same: populating the G-Buffer with data.

Once we’ve assigned the shader to the material, we attach a diffuse texture to it. This will be the object’s albedo value, which we write into the G-Buffer together with the corresponding position and normal data. Multi-Render Target support in Vortex is fundamental to filling the G-Buffer efficiently in a single render pass.
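
To make the idea concrete, a geometry-pass fragment shader along these lines writes albedo, world-space position and normal to separate attachments in a single pass. This is an illustrative sketch with assumed input and output names, not the engine’s actual shader:

// Illustrative geometry-pass fragment shader (GLSL 330), embedded as a
// C++ raw string literal.
const char* kGeometryPassFrag = R"(
#version 330 core

in vec3 world_position;
in vec3 world_normal;
in vec2 uv;

uniform sampler2D diffuse_texture;

layout(location = 0) out vec3 out_albedo;
layout(location = 1) out vec3 out_position;
layout(location = 2) out vec3 out_normal;

void main()
{
    out_albedo   = texture(diffuse_texture, uv).rgb;
    out_position = world_position;
    out_normal   = normalize(world_normal);
}
)";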

Next, we create a light entity. We covered the new light interface in Vortex back in February. Light entities are gathered by the V3 renderer and, depending on their type, drawn directly to the framebuffer. The light shader is selected according to the light type, and it accesses the readily available G-Buffer data to shade the scene.
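
The light pass can then read that data back out of the G-Buffer. Here is a sketch of what a simple directional light shader might look like, under the same naming assumptions:

// Illustrative directional-light fragment shader (GLSL 330) for the
// light pass, embedded as a C++ raw string literal.
const char* kDirectionalLightFrag = R"(
#version 330 core

in vec2 uv;

uniform sampler2D g_albedo;
uniform sampler2D g_normal;
uniform vec3 light_direction; // normalized, pointing toward the light
uniform vec3 light_color;

out vec4 frag_color;

void main()
{
    vec3 albedo = texture(g_albedo, uv).rgb;
    vec3 normal = normalize(texture(g_normal, uv).xyz);
    float ndotl = max(dot(normal, light_direction), 0.0);
    frag_color  = vec4(albedo * light_color * ndotl, 1.0);
}
)";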

The postprocessing underpinnings of the new renderer turned out to be essential for testing new ideas and debugging the implementation every step of the way.

I’m quite happy with the results. Now that we have a complete vertical slice of the deferred renderer, we can iteratively add new features on top of it to expand its capabilities. Off the top of my head, support for more light types and for transparent meshes are two major features I want to tackle in the upcoming weeks.

G-Buffer

This week I started working on the G-Buffer implementation for the deferred renderer.

A test of the G-Buffer. Colors correspond to the coordinates of the vertices in world space.

The G-Buffer pass is the first half of any deferred shading algorithm. The idea is that, instead of drawing shaded pixels directly to the screen, we store geometric information for all our opaque objects in a “Geometry Buffer” (G-Buffer for short). This information is then used in a later lighting pass.

In sharp contrast to forward rendering, where the output of our rendering is the already-lit pixels, lighting calculations here are deferred to a later pass, hence the term deferred rendering.

The image above shows a simple test of the G-Buffer implementation in Vortex. In this test, we draw a number of boxes, with each vertex colored according to its position in world space as read from the G-Buffer.

Notice how as vertices move left to right, we gain red (x translates to red). Similarly, as vertices move from bottom to top, we gain green (y translates to green). We cannot see it in this picture, but as vertices move from the back to the front, we also gain blue (z translates to blue).

Through this test we can also verify that moving the camera does not change the colors at all. This helps validate that our position data are indeed in world space.
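
For reference, the debug pass for this kind of test can be as small as the following sketch, which simply presents the G-Buffer’s world-space position data as colors (illustrative GLSL with assumed names):

// Illustrative G-Buffer position visualization shader (GLSL 330),
// embedded as a C++ raw string literal.
const char* kPositionDebugFrag = R"(
#version 330 core

in vec2 uv;

uniform sampler2D g_position; // world-space positions from the G-Buffer

out vec4 frag_color;

void main()
{
    // x maps to red, y to green, z to blue.
    frag_color = vec4(texture(g_position, uv).xyz, 1.0);
}
)";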

This is going to be all for today. Next week, work will continue on the G-Buffer implementation. Stay tuned for more!

Multi-Render Targets

This week, work went into testing Multi-Render Target support. Usually abbreviated MRT, Multi-Render Targets allow our shaders’ output to be written to more than one texture in a single render pass.

A test of Multi-Render Targets in Vortex 3. Texels from the left half of the box come from a texture, whereas texels from the right half come from another.

MRT is the foundation of every modern renderer, as it allows building complex visuals without requiring several passes over the scene. With MRT, we can simply specify all the textures that a render pass will be writing to and then, from the shader code, write to each out variable.

MRT is standard in both Core OpenGL and OpenGL ES 3.0, so opting into it will not preclude the renderer from running on mobile hardware. Only a couple of minor adjustments had to be made to our current shaders in order to support MRT.

In particular, OpenGL 3.3 changed the way fragment shaders write to multiple attachments. Prior to OpenGL 3.3, we would write to the GLSL built-in variable gl_FragData[i] to select the output we were writing to. Starting in OpenGL 3.3, we explicitly specify the layout of our out variables in the fragment shader using the (seemingly weird) syntax layout(location = i) out vec3 attachment_i; and then write to each variable directly.

In order to achieve this, we had to bump the GLSL version to 330. We could have stayed at version 150 and used a GLSL extension instead, but since we are trying to stick to standard, out-of-the-box OpenGL as much as possible, that route was ruled out.
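
To make the setup concrete, here is a rough sketch of attaching two color textures to a framebuffer and enabling both outputs via glDrawBuffers. Texture creation and error checking are omitted, and all names are placeholders:

#include <GL/glew.h> // or the project's OpenGL loader of choice

// Sketch: bind two color textures as the MRT attachments of an FBO.
void setupMrtFramebuffer(GLuint fbo, GLuint redTexture, GLuint greenTexture)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, redTexture, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, greenTexture, 0);

    // Map fragment shader outputs 0 and 1 to the two attachments.
    const GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, buffers);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}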

A test of the Multi-Render Target functionality in Core OpenGL 3.3

To exercise MRT, I designed a simple test; the output can be seen above. In these images, the shader used to render the box writes its red and green channels to two different texture attachments.

The blit pass that draws the framebuffer to the screen samples both textures, using pixels from the red texture for the left half of the screen and pixels from the green texture for the right half. This creates the visual effect of the box being painted in two colors.
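
The blit shader for this test can be sketched roughly as follows (illustrative GLSL with assumed uniform names):

// Illustrative blit fragment shader (GLSL 330): the left half of the
// screen samples one attachment, the right half samples the other.
const char* kBlitFrag = R"(
#version 330 core

in vec2 uv;

uniform sampler2D red_texture;
uniform sampler2D green_texture;

out vec4 frag_color;

void main()
{
    frag_color = (uv.x < 0.5) ? texture(red_texture, uv)
                              : texture(green_texture, uv);
}
)";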

I’m very happy with the results of this test. MRT is a very approachable feature and there is no reason not to use it if you are targeting recent hardware.

The next steps will be to clean up the internal Framebuffer API even more to make MRT support more flexible, and to start working on implementing the G-Buffer. As usual, stay tuned for more!

Quick Procedural Geometry Notes

This past week was crazy busy and I didn’t have any time to sit down and code. Nonetheless, I had the time to work out the math and jot down a quick piece of pseudocode on how to procedurally generate a cone.

Math and Pseudocode for generating a cone procedurally.

Here, I’ve chosen to place the cone’s apex at (0,0,0) in Object Space. This will allow me to always rotate the cone about its apex, something that will be extremely helpful for the effect we’re trying to achieve.
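
Sketching the idea out in C++, the pseudocode could take roughly the following shape. This is an untested illustration that assumes the apex sits at the origin and the base circle lies at y = -height:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Sketch: triangle list for a cone with its apex at the object-space
// origin and its base circle at y = -height.
std::vector<Vec3> buildCone(float radius, float height, int segments)
{
    std::vector<Vec3> tris;
    const float step = 2.0f * 3.14159265f / segments;
    const Vec3 apex{0.0f, 0.0f, 0.0f};
    const Vec3 baseCenter{0.0f, -height, 0.0f};

    for (int i = 0; i < segments; ++i)
    {
        const float a0 = i * step;
        const float a1 = (i + 1) * step;
        const Vec3 b0{radius * std::cos(a0), -height, radius * std::sin(a0)};
        const Vec3 b1{radius * std::cos(a1), -height, radius * std::sin(a1)};

        // Side triangle (apex to base edge).
        tris.push_back(apex); tris.push_back(b0); tris.push_back(b1);

        // Base cap triangle (fan around the base center).
        tris.push_back(baseCenter); tris.push_back(b1); tris.push_back(b0);
    }
    return tris;
}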

I haven’t had the time to test this idea in code yet, however, the plan is to build it into Vortex as part of the vtx::procedural package.

I’m not giving away any more details this week! You’ll have to stay tuned for more!

Wrapping up 2016: Lessons Learned from working on Vortex

2016 marks a year where a lot of work went into both the Vortex Engine and the Vortex Editor. Working on these projects in my free time has been a great way to continue honing my C++ and OpenGL skills. In this post, I am going to do a quick retrospective on the work done and present a few lessons learned.

Lessons on building a UI-heavy application

Rendering a deserialized scene using the new renderer.

The way I decided to approach this work was to divide Vortex into two separate projects: the Vortex Engine project and a brand new Editor built on top of it. The Engine itself has been ongoing since 2010.

Today, both projects have reached a level of maturity where we could clearly have two engineers working full-time on them.

The Editor is definitely one of the driving forces pushing the Engine to support more and more visual features, all to give the user more power of expression. The amount of work required to expose these features to the outside world, however, is something I did not expect.

Let’s walk through a simple example: selecting an Entity and editing it so we can change its properties. In order to do this we must:

  1. Provide the user a means to select an Entity.
  2. React to this, introspecting the selected Entity and its components.
  3. Build and display a custom UI for each component in the Entity.
  4. For each component, render its UI widgets and preload them with the component’s current state.
  5. Provide the user the means to change these properties.
  6. Have the changes reflect in the 3D scene immediately.

This system can quickly grow into thousands of lines of code. Even if the code does not have the strenuous performance requirements of the rendering loop, we still need to develop responsive code with a good architecture that allows building more features on top of it.

The rewards from this effort are huge, however. The Editor UI is the main point of interaction of the user with Vortex and it’s the way that she can tell the Engine what she wants it to do. Good UI and, more importantly, good UX, are key in making the Editor enjoyable to the user.

Lessons on going with C++11

C++ Logo

I decided to finally make the jump and update the codebase from C++98 to C++11, and the min-spec for running the renderer from OpenGL ES 2.0 to Core OpenGL 3.3.

Going to C++11 was the right choice. I find that C++11 allows for more power of expression when developing a large C++ codebase, and it provides several utilities for making the code cleaner.

There are a few takeaways from using C++11, however, that I think may not be as clear for people just getting started with this version of the language.

Lessons on C++11 enum classes

I like enum classes a lot and I tend to use them as much as possible. There were several places throughout the legacy Vortex Engine code where old C-style structs and/or static const values used to declare configuration parameters did not look too clean. C++ enum classes helped wrap these while keeping their enclosing namespaces clean.

The only limitation I found was using enum classes for bitmasks. enum class members can definitely be cast to make them behave as expected; doing so, however, strong-arms the type system, and some may argue it does away with the advantages of having it.

Additionally, if you’re trying to implicitly cast a binary operator expression involving an enum class value into a bool, you are going to find a roadblock:

I like writing if (mask & kParameterFlag), as I find it clearer to read than a mandatory comparison against zero at the end, and C++11 enum classes do not give me that option.
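
One common workaround, sketched below, is to overload the bitwise operators for the flag type and add a small helper that makes the bit test explicit. This is a general-purpose pattern, not code lifted from the engine:

#include <cstdint>

enum class ParameterFlags : std::uint32_t {
    None    = 0,
    Depth   = 1 << 0,
    Stencil = 1 << 1,
};

constexpr ParameterFlags operator|(ParameterFlags a, ParameterFlags b)
{
    return static_cast<ParameterFlags>(
        static_cast<std::uint32_t>(a) | static_cast<std::uint32_t>(b));
}

constexpr ParameterFlags operator&(ParameterFlags a, ParameterFlags b)
{
    return static_cast<ParameterFlags>(
        static_cast<std::uint32_t>(a) & static_cast<std::uint32_t>(b));
}

// Explicit bit test; stands in for the implicit bool conversion we lack.
constexpr bool hasFlag(ParameterFlags mask, ParameterFlags flag)
{
    return (mask & flag) != ParameterFlags::None;
}

// Usage:
//   auto mask = ParameterFlags::Depth | ParameterFlags::Stencil;
//   if (hasFlag(mask, ParameterFlags::Depth)) { /* ... */ }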

Lessons on C++11 Weak Pointers

C++11 Shared Pointers (std::shared_ptr and std::weak_ptr) are great for an application like the Editor, where Entity references (pointers) need to be passed around.

Imagine this situation: we have an Entity with a few components that is selected. We have several UI components that are holding pointers to all these objects. Now, if the user decides to delete this entity or remove one of its components, how do we prevent the UI code from dereferencing dangling pointers? We would have to hunt down all references and null them.

Using C++11’s std::weak_ptr, we know that a pointer to a destroyed Entity or Component will fail to lock. This means the UI controllers can detect situations where they are pointing to deleted data and handle them gracefully.
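
Here is a minimal sketch of the pattern, with a hypothetical UI controller holding a non-owning reference to an engine-managed Entity:

#include <iostream>
#include <memory>

struct Entity { /* engine-managed object */ };

// Hypothetical UI controller holding a non-owning reference.
class EntityInspector {
public:
    explicit EntityInspector(std::weak_ptr<Entity> entity)
        : entity_(std::move(entity)) {}

    void refresh()
    {
        if (std::shared_ptr<Entity> e = entity_.lock()) {
            // The Entity is still alive: safe to read its properties.
            std::cout << "refreshing inspector UI\n";
        } else {
            // The Entity was deleted elsewhere: clear the panel
            // instead of dereferencing a dangling pointer.
            std::cout << "entity gone, clearing UI\n";
        }
    }

private:
    std::weak_ptr<Entity> entity_;
};

int main()
{
    auto entity = std::make_shared<Entity>();
    EntityInspector inspector{entity};
    inspector.refresh(); // entity alive
    entity.reset();      // the user deletes the entity
    inspector.refresh(); // lock fails, handled gracefully
}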

Lessons on C++11 Shared Pointers

Like any other C++ object, smart pointers passed by value will be copied. Unlike copying a raw pointer, however, copying a smart pointer is an expensive operation.

Smart pointers need to keep a reference count to know when it’s okay to delete the managed object and, additionally, C++ mandates that this bookkeeping be performed in a thread-safe fashion. In practice, this means atomic operations (or even a lock) every time a smart pointer is copied or destroyed.

If you are not careful, your CPU cycles will be spent copying pointers around, and you may have a hard time when you try to scale your Engine down to run on a mobile device.

I investigated this issue and found this amazing write up by Herb Sutter: https://herbsutter.com/2013/06/05/gotw-91-solution-smart-pointer-parameters/. The idea is to avoid passing shared pointers by copy to any function that does not intend to keep a long-term reference to the object.

Don’t be afraid of calling std::shared_ptr::get() to pass a raw pointer into a short function that performs temporary work, and only pass the smart pointer itself when you want to signal to the callee that you are sharing ownership of the pointed-to object.
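
Here is a short sketch of these guidelines in the spirit of Sutter’s write-up; the types and functions are illustrative:

#include <memory>
#include <string>

struct Mesh { std::string name; };

// Temporary work, no ownership involved: take a raw pointer (or a
// reference). No reference count is touched on the way in.
void printMeshName(const Mesh* mesh)
{
    // ... short-lived, read-only use of mesh ...
}

// The callee keeps a long-term reference: take shared_ptr by value,
// making the shared ownership explicit in the signature.
class RenderQueue {
public:
    void enqueue(std::shared_ptr<Mesh> mesh)
    {
        retained_ = std::move(mesh); // one ref-count bump, then a move
    }
private:
    std::shared_ptr<Mesh> retained_;
};

int main()
{
    auto mesh = std::make_shared<Mesh>();
    printMeshName(mesh.get()); // cheap: no ref-count traffic
    RenderQueue queue;
    queue.enqueue(mesh);       // intentional shared ownership
}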

Lessons on using Core OpenGL

OpenGL Logo. Copyright (C) Khronos Group.

Choosing to go for a specific minimum version of Core OpenGL helps root out all the questions that pop up every time you use anything outside OpenGL 1.1 and wonder if you should implement the ARB / EXT variants as well.

Core OpenGL 3.3 makes it easier to discover the optimal GPU usage path, as you are now required to use VBOs, VAOs, Shaders and other modern video card constructs. It also has the added advantage that legacy OpenGL calls (which should not be used anyway) will simply not work.

Differences between OpenGL implementations, however, are still pervasive enough that code tested and verified to work on Windows may not work at all on OSX due to differences in the video driver. Moving the codebase to a mobile device will again prove a challenge.

The lesson here is to never assume that your OpenGL code works. Always test on all the platforms you claim to support.

In Closing

These were some of the most prominent lessons learned from working on Vortex this year. I am looking forward to continuing to work on these projects through 2017!

I think we’ve only scratched the surface of what the new renderer’s architecture can help build and I definitely want to continue developing the renderer to support more immersive experiences, as well as the Editor so it exposes even more features to the user!

Thank you for joining in through the year and, as usual, stay tuned for more!

Shader Texture Mapping and Interleaved Arrays

Work continues on the new Rendering System for the Vortex Engine.

This week was all about implementing basic texture mapping using GL Core. The following image shows our familiar box, only this time, it’s being textured from a shader.

Texture Mapping via Interleaved Arrays and Shaders in the Vortex Engine.

A number of changes had to go into the Renderer in order to perform texture mapping. These touched almost all the layers of the Engine and Editor.

  1. First, I wrote a “Single Texture” shader in GLSL to perform the perspective transform of the Entity’s mesh, interpolate its UV texture coordinates and sample a texture.
  2. Second, I had to change the way the retained mesh works in order to be able to send texture coordinate data to the video card (more on this later).
  3. Finally, I had to modify the Editor UI to allow selecting which shader is to be used when rendering an Entity.

Regarding how to submit mesh texture coordinate data to the video card: because Core OpenGL uses Vertex Buffer Objects (VBOs), it was clear that UVs had to be sent to (and retained in) GPU memory too.

There are two ways to achieve this in OpenGL. One consists in creating several VBOs so that each buffer holds one attribute of the vertex. Position data is stored in its own buffer, texture coordinates are stored in their own buffer, per-vertex colors take a third buffer, and so on and so forth.

There’s nothing wrong with doing things this way and it definitely works. However, there is one consideration to take into account: when we scatter our vertex data across several buffers, the GPU has to gather all of this data at render time, every frame, as part of vertex processing.

I am personally not a big fan of this approach. I prefer interleaving the data myself once and then sending it to the video card in a way that’s already prepared for rendering. OpenGL is pretty flexible in this regard and it lets you interleave all the data you need in any format you may choose.

I’ve chosen to store position data first, then texture coordinate data and, finally, color data (XYZUVRGBA). The retained mesh class will be responsible for tightly packing vertex data into this format.

Once data is copied over to video memory, setting up the vertex attrib pointers can be a little tricky. This is where interleaved arrays become more challenging than separate attribute buffers. An error here will cause the video card to read garbage memory and possibly segfault in the GPU. This is not a good idea: I’ve seen errors like this bring down entire operating systems, and it’s not a pretty picture.

A sheet of paper will help calculate the byte offsets. It’s important to write down the logic and then manually test it using pen and paper. The video driver must never read outside the interleaved array.
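
As an illustration of the offset math, here is a sketch of describing the XYZUVRGBA layout to OpenGL. The attribute indices are assumptions; in practice they come from the shader program:

#include <GL/glew.h> // or the project's OpenGL loader of choice
#include <cstddef>

// One interleaved vertex: position (XYZ), texcoords (UV), color (RGBA).
struct Vertex {
    float x, y, z;
    float u, v;
    float r, g, b, a;
};

// Sketch: describe the interleaved layout to OpenGL. Assumes the VAO
// and VBO are already bound and filled with Vertex data.
void setupInterleavedAttribs()
{
    const GLsizei stride = sizeof(Vertex); // 9 floats = 36 bytes

    glEnableVertexAttribArray(0); // position at byte offset 0
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<void*>(offsetof(Vertex, x)));

    glEnableVertexAttribArray(1); // texcoords at byte offset 12
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<void*>(offsetof(Vertex, u)));

    glEnableVertexAttribArray(2); // color at byte offset 20
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<void*>(offsetof(Vertex, r)));
}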

Once ready, OpenGL will take care of feeding the vertex data into our shader inputs, where we will interpolate the texture coordinates and successfully sample the bound texture, as shown in the image above.

Now that we have a working foundation where we can develop custom shaders for any Entity in the scene, it’s time to start cleaning up resource management. Stay tuned for more!

Basic Entity Instancing and Positioning

This week, I’ve continued working on the Editor, adding Entity instancing through the UI and (simple) position editing.

An animated gif showing very basic entity creation and positioning in the Vortex Editor.

The image above features a rough preview that helps show how any number of Entities can be created now. The next steps will be to refine this to add Mouse-based selection, as well as a more comprehensive UI that allows for finer control over Entity position, rotation and scale.

Once these and a few more Editor features are in, I will create a higher quality video showcasing the creation of an environment by means of Entity editing.

Stay tuned for more!

Goodbye Hypr3D

It seems the sun has set on the good old Hypr3D app. This was the first commercial app to use a licensed copy of my 3D Engine back in 2011.

Hypr3D's help page showing the viewer's recognized gestures.

Hypr3D rendering a reconstructed pineapple model on an iPhone.

Back in the day, the Vortex Engine was at version 1.0 and was pretty much done by the time the App was developed. Although this made using it an almost “plug and play” experience, there were some interesting problems that had to be solved during the App development process.

One problem in particular was how to integrate an HTML UI with the App’s Objective-C Front Controller. This was way before any of the modern “hybrid app” toolchains existed, and I remember being on a trip to Seattle when I started to lay down the main concepts.

Goodbye Hypr3D, I’ll always remember the countless hours staring at the cookies reference model while debugging!

Hypr3D’s model viewer rendering a reconstructed model featuring a pile of rocks on an iPad.

I’ve created a dedicated page in the memory of Hypr3D with a short feature list and some more screenshots. You can visit it here.

Although Hypr3D may not be available anymore, Vortex Engine 1.0 is anything but dead. Stay tuned ;)