Multiple Directional Realtime Lights

This week work went into supporting multiple directional realtime lights in the new Vortex V3 renderer.

Multiple Realtime Lights rendered by Vortex V3.


In the image above, we have a scene composed of several meshes, each with its own material, being affected by three directional lights. The lights have different directions and colors and the final image is a composition of all the color contributions coming from each light.

In order to make the most out of this functionality in the engine, I revamped the Light Component Inspector. It’s now possible to set a light’s direction and color through the UI and see the changes take effect in the scene immediately. You can see the new UI in the screenshot above.

Now, since lights are entities, I considered reusing the entity’s rotation to rotate a predefined vector and thus define the light’s direction. In the end, however, I decided against it: I think it is clearer to let the user specify the direction vector explicitly in the UI than to make them play with angles in their head to work out an obscure internal vector.

I’m pretty happy with the results. Internally, each light is computed individually and then all contributions are additive-blended onto the framebuffer. This means the cost of rendering n objects affected by m lights is n + m draw calls. This is a big advantage over the equivalent multi-pass forward renderer, which would require at least n * m draw calls.
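
For reference, here is a minimal sketch of how this additive composition might look in OpenGL, assuming the appropriate GL headers. The light struct and the two helper functions are hypothetical stand-ins, not the engine's actual API:

#include <vector>

// Hypothetical types and helpers; Vortex's actual interfaces likely differ.
struct DirectionalLight { float direction[3]; float color[3]; };
void bindLightUniforms(const DirectionalLight& light);  // upload direction/color
void drawFullscreenQuad();                              // one draw call

// Accumulate each light's contribution onto the framebuffer.
// Each light costs a single fullscreen draw call, hence the n + m total.
void compositeLights(const std::vector<DirectionalLight>& lights) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);  // dst = dst + src: contributions add up
    for (const DirectionalLight& light : lights) {
        bindLightUniforms(light);
        drawFullscreenQuad();
    }
    glDisable(GL_BLEND);
}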

Notably missing from the image above is color bleed. Photorealism is addictive: the more you approximate real life, the easier it becomes to tell that an image is synthetic when something is missing. That will be a topic for another time, however.

Next week I want to make some additions to the material system to make it more powerful, as well as start implementing omnidirectional lights.

Stay tuned for more!

Normal Mapping 2.0

This week we switched gears back into rendering! A lot of work went into building Normal Mapping in the new renderer. The following image shows the dramatic results:

Normal mapping in the new Deferred Renderer.

Here, I switch back and forth between the regular Geometry Pass shader and a Normal Mapping-aware one. Notice how normal mapping dramatically changes the appearance of the bricks, making them feel less like part of a flat surface and more like a real, coarse one.

I initially discussed Normal Mapping back in 2014, so I definitely recommend you check out that post for more details on how the technique works. The biggest difference in building Normal Mapping in Vortex V3 compared to Vortex 2.0 was implementing it on top of the new Deferred Renderer.

There is more work to be done in terms of Normal Mapping, such as adding specular mapping, but I’m happy with the results so far. Next week we continue working on graphics! Stay tuned for more!

Putting it all together

This week has been a big one in terms of wrangling together several big pillars of the engine to provide wider functionality. The image below shows how we can now dynamically run an external Lua script that modifies the 3D world on the fly:

Vortex loading and running an external script that changes the texture of an entity’s material.

In the image above, I’ve created two boxes. Each box has its own material, and each material references a different texture.

Here, I “mistakenly” drag a character texture from the Asset Library and assign it as the second box’s texture. Oh no! How can we fix this? It’s easy: just run an external script that assigns the first box’s texture to the second!

I’ve pasted the code of the script below:

function get_entity_material( entity )
	-- retrieve the material from the entity's first render component
	local RENDER_COMPONENT_TYPE = 1
	local rendercomp = entity:first_component_of_type( RENDER_COMPONENT_TYPE )
	local material = rendercomp:get_material()
	return material
end

ent0 = vtx.find_first_entity_by_name("box0")
mat0 = get_entity_material( ent0 )
tex0 = mat0:get_texture( "diffuseTex" )

ent1 = vtx.find_first_entity_by_name("box1")
mat1 = get_entity_material( ent1 )

mat1:set_texture( "diffuseTex", tex0 )

print("done")

As you can see, the script is pretty straightforward. It finds the boxes, drills all the way down to their materials and then assigns the texture of the first box to the second. The changes are immediately seen in the 3D world.

It’s worth noting that all function calls into the vtx namespace and derived objects actually jump into C++. The script is therefore dynamically manipulating engine objects; that’s why we see its effects in the scene view.

The function names are still a work in progress and, admittedly, I need to write more scripts to see whether they feel comfortable or whether they’re too long and therefore hard to remember. My goal is to make the scripting interface as simple to use as possible, so if you have any suggestions I would love to hear your feedback! Feel free to leave a comment below!

Next week I will continue working on adding more functionality to the scripting API, as well as adding more features to the renderer! Stay tuned for more!

Component Introspection

This week, work on the scripting interface continued. As the image below shows, I can now access an entity’s components and even drill down to its Material through the Lua console.

Introspecting a Render Component to access its material via the Lua interface.

The image above shows an example of the scripting interface for entities and components. Here, we create a new Entity from the builtin Box primitive and then find its first component of type “1”. Type 1 is an alias for the Entity’s Render Component, which is responsible for binding a Mesh and a Material together.

Once we have the Render Component, we use it to access its Material property and print its memory address.

As the image shows, although Lua allows for very flexible duck typing, I am performing type checking behind the scenes to make scripting mistakes obvious to the user. Allow me to elaborate:

For the interface, I’ve decided that all components will be hidden behind the vtx.Component “class”. This class is responsible for exposing the interface to all native component methods, such as get_material(), set_mesh(), get_transform() and so forth.

The problem is, how do we prevent trying to access the material property of a Component that doesn’t have one, such as the Md2AnimationComponent? In my mind, there are two ways. I’m going to call them the “JavaScript” way and the “Python” way.

In the JavaScript way, we don’t really care: we allow calling any method on any component and silently fail when a mismatch is detected. We may return nil or “undefined”, but at no point do we raise an error.

In the Python way, we perform a sanity check before actually invoking the function on the component and halt the operation when an error is found. You can see this in the example above: we purposefully attempt to get the material of a Base Component (component type 0), which doesn’t have one. The Engine detects the inconsistency and raises an error.

I feel the Python way is the way to go, as it prevents the subtle, hard-to-debug errors that arise from letting any method be called on any component and happily carrying on to -hopefully- reach some sort of result.
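
To make this concrete, here is a sketch of what the native side of such a check might look like. Only lua_State and luaL_error are the real Lua C API here; the engine types and helpers are hypothetical stand-ins:

#include <lua.hpp>

// Hypothetical engine types and helpers.
class Component; class Material;
Component* checkComponentArg(lua_State* L, int index);  // fetch the userdata argument
int componentType(const Component* component);          // e.g. 0 = Base, 1 = Render
Material* componentMaterial(Component* component);
void pushMaterial(lua_State* L, Material* material);    // wrap the material for Lua

static const int kRenderComponentType = 1;

// Native function backing component:get_material(), "the Python way":
// validate the component type up front and raise a Lua error on mismatch.
static int component_get_material(lua_State* L) {
    Component* component = checkComponentArg(L, 1);
    if (componentType(component) != kRenderComponentType) {
        return luaL_error(L, "component of type %d does not have a material",
                          componentType(component));
    }
    pushMaterial(L, componentMaterial(component));
    return 1;  // one value returned to the script
}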

A third alternative would have been to expose a separate “class” for every component type. This would certainly work, but I’m concerned about a potential “class explosion” as we continue to add more and more components to the Engine. Furthermore, I feel strongly typed duck typing is a good approach for a language like Lua, well in tune with the language’s philosophy.

Now that we can drill all the way down to an Entity’s material, it’s time to expand the interface to allow setting the shader and the material properties, giving the script developer control over how entities are rendered by the Engine. Stay tuned for more!

Vortex Editor turns 1 year!

I hadn’t realized it, but the Vortex Editor turned one year old a couple of months ago. I set out to work on this project in my free time with a clear set of objectives, and it’s hard to believe that a year has already passed since the initial kick-off.

Of course, the Editor is closely tied to the Engine, which has seen its fair share of improvements over the year. From building an entirely new deferred renderer to replacing the node-based scene graph with a flexible, extensible Entity Component System model, enhancements have been wide and deep.

This post is a short retrospective on the accomplishments of the Vortex Editor and Vortex Engine through this last year.

Vortex Engine Achievements in the last year:

  1. Kicked off the third iteration of the Vortex Engine, codename “V3”.
  2. Upgraded the Graphics API to Core OpenGL 3.3 on Desktop and OpenGL ES 3.0 on Mobile.
  3. Implemented Deferred Rendering from scratch using MRT, establishing the base for PBR rendering.
  4. New Entity Component System model, far more flexible than the old scene graph model, with support for Native and Script Components.
  5. Overhaul of several internal engine facilities, such as the Material and Texture systems.
  6. Completely redesigned engine facilities such as Lights and Postprocessing.
  7. New Lua-powered engine scripting.
  8. Ported to Windows, fixing several cross-compiler issues along the way. The engine now builds with clang, GCC and MSVC and runs on Linux, Mac, Windows, iOS and Android.
  9. Started moving the codebase to Modern C++ (C++11).

Vortex Editor Achievements in the last year:

  1. Successfully kicked off the project. Built from scratch in C++.
  2. Built a comprehensive, modular UI with a context-sensitive architecture that adjusts to what you’re doing.
  3. Bootstrapped the project using Vortex Engine 2.0, then quickly moved to V3 once things were stable.
  4. Provided basic serialization/deserialization support for saving and loading projects.
  5. Implemented a Lua REPL that allows talking to the engine directly and scripting the Editor.
  6. Friendly Drag-and-Drop interface for instantiating new Entities.
  7. Complete visual control over an Entity’s position, orientation and scale in the world, as well as the configuration of its components.
  8. Allowed dynamically adding new components to entities to change their behavior.

It has been quite a ride this past year. I believe all these changes have successfully built and expanded upon the 5 years of work that went into Vortex 1.1 and 2.0. I’m excited to keep working on these projects and ultimately come up with a product that is fun to tinker with.

My objectives for year two of the Editor include implementing scene and asset packaging, expanding scripting support and adding PBR rendering.

Stay tuned for more!

Building the Engine Scripting API

When we left off last week, we had just implemented a Lua REPL in the Vortex Editor console. This week, I wanted to take things further by allowing Lua scripts to access the engine’s state, create new entities and modify their properties.

Scripting Interface to the Engine. A C++ cube entity is instantiated from Lua code that is evaluated on the fly in the console.

To get started, I added a single, simple function: vtx_instantiate(). This function is available to Lua, but its actual implementation is provided in native C++ code. The image above shows how we can use this function to add an entity to the scene from the console.

This simple example lets us test two important concepts: first, that we can effectively call into C++ from Lua; second, that we are able to pass parameters between the two languages. In this case, the single expected argument is a string specifying which primitive or asset to instantiate.
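
As a sketch of how this might be wired up: luaL_checkstring and lua_register below are the real Lua C API, while the engine hook is a hypothetical stand-in:

#include <lua.hpp>
#include <string>

// Hypothetical engine hook.
void spawnEntityFromAsset(const std::string& assetName);

// The C++ side of vtx_instantiate().
static int vtx_instantiate(lua_State* L) {
    // luaL_checkstring raises a Lua error if the first argument is not a string.
    const std::string assetName = luaL_checkstring(L, 1);
    spawnEntityFromAsset(assetName);  // create the entity in the active scene
    return 0;                         // nothing returned to the script
}

void registerScriptingApi(lua_State* L) {
    // Expose the native function to scripts as a global.
    lua_register(L, "vtx_instantiate", vtx_instantiate);
}

Once registered, a console call such as vtx_instantiate("cube") (the asset name here is just an example) lands directly in the C++ function above.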

With this in place, we can now move on to building a more intricate API that enables controlling any aspect of the scene, responding to user input and even implementing an elaborate world simulation.

Best of all, because the Lua VM is embedded into the engine, scripts built against the Vortex API will by definition be portable and run on any platform the engine runs on. This includes, of course, mobile devices.

The idea now is to continue to expand the engine API, developing a rich, easy to use set of functions. API design should prove an interesting exercise. Stay tuned for more!

Adding a Scripting Engine to Vortex

This week I added scripting support to the Engine. I chose to go with Lua because of how easy it is to integrate into existing C/C++ codebases.

Initial integration of a Lua VM in the form of an updated Console

I’ve mentioned Lua several times before on this blog, but if you’re not familiar with it, it’s a great open source programming language developed at the Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil. It’s very easy to pick up.

Here’s a 10,000-foot view of the language, courtesy of Coffeeghost:

Lua cheatsheet by coffeeghost.

I’ve been interested in adding Lua scripting to the engine for a while now. I finally decided to take the plunge while revisiting serialization, after a friend suggested going directly with Lua for the manifest file instead of JSON.

Moving from a “declarative” manifest to an “imperative” one might seem strange; however, it gives me the opportunity to start fleshing out the Lua-to-Engine interface that will later serve engine-wide scripting.

I am very happy with the way things turned out. In the image above you can see how I refactored the Vortex Editor console to support a full Lua REPL.

Powered by the Lua engine in Vortex, the console is no longer just a place where the engine prints messages, but a true editor shell with a direct interface to the engine. This is similar to what some popular 3D modeling software products do with Python.

I am excited about having Lua scripts as first class citizens in the engine. Expect to see much more Lua in this blog in the upcoming months!

Stay tuned for more!

Deferred Rendering – Part II: Handling Normals

This week work went into adding normal data to the G-Buffer and Light passes. I left normal data out while I was still implementing the architecture for the deferred renderer, so in the previous post we were only doing ambient lighting.

The Stanford Dragon 3D model rendered with a single directional light by the new deferred renderer.

Adding normal data required a few tweaks to several components of the renderer.

First, I had to extend my Vertex Buffers to accommodate three extra floating-point values per vertex (the normal data). If you’ve been following this post series, you’ll remember from this post that we designed our Interleaved Arrays with the format XYZUVRGBA. I’ve now expanded this format to include per-vertex normals. The new packing is XYZNNNUVRGBA, where NNN denotes the normal’s x, y and z values.
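
As an illustration, here is a sketch of how this interleaved layout maps to vertex attributes in Core OpenGL, assuming the appropriate GL headers. The struct and the attribute indices are assumptions for the example, not the engine's actual code:

#include <cstddef>  // offsetof

// One interleaved vertex in the XYZNNNUVRGBA packing: 12 floats, 48 bytes.
struct Vertex {
    float position[3];  // XYZ
    float normal[3];    // NNN
    float uv[2];        // UV
    float color[4];     // RGBA
};

// Wire the vertex attributes to the interleaved buffer.
// The attribute indices (0-3) are assumptions for illustration.
void setupVertexAttributes() {
    const GLsizei stride = sizeof(Vertex);  // 48 bytes between consecutive vertices
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, position));
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, normal));
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, uv));
    glEnableVertexAttribArray(3);
    glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, color));
}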

Next, the G-Buffer had to be extended with a third render target: a normal texture. This is where we store the interpolated normal data calculated during the Geometry pass.

Finally, we extend the Light pass shader to take in the normal texture, sample the per-fragment world-space normal and use it in its lighting calculation.
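
The core of that calculation is small. Here is a sketch of the relevant GLSL, with assumed uniform and varying names:

// Sketch of the directional Light pass fragment shader core (GLSL 330).
// uNormalTex, uLightDir, uLightColor and vTexCoord are assumed names.
const char* kLightPassSnippet = R"(
    vec3 N = normalize(texture(uNormalTex, vTexCoord).xyz);
    float NdotL = max(dot(N, -uLightDir), 0.0);  // Lambertian term
    vec3 diffuse = uLightColor * NdotL;
)";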

The test box rendered with a hard directional light with direction (-1,-1,0).

For testing, I used our usual textured box model and a directional light with a rotating direction, as depicted above. The box is great for these kinds of tests, as its faces are parallel to the coordinate planes, with opposite faces having opposite normals.

Soon enough, the test revealed a problem: faces with negative normal components were not being lit. Drawing the normal data to the framebuffer during the light pass helped narrow down the problem tremendously. It turned out that normal data sampled from the normal texture was being clamped to the [0,1] range, which meant we could not represent normals with negative components.

Going through the OpenGL documentation for floating-point textures revealed the cause. According to the docs, if the data type of a texture ends up being defined as fixed-point, sampled data will be clamped. The problem was caused by a single incorrect parameter in the glTexImage2D calls used to create the render textures: the data type was being set to GL_UNSIGNED_BYTE for floating-point textures.

The fix was simple enough: set the data type to GL_FLOAT for floating-point textures, even if the internal format is already floating-point.
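
The corrected call might look as follows. GL_RGB16F as the internal format is an assumption for the example; the key part is passing GL_FLOAT as the data type:

// Allocate the normal render target as a true floating-point texture.
// With GL_FLOAT as the data type, sampled values are no longer clamped to [0,1].
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, nullptr);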

Our old forktruck scene, loaded into the Editor and rendered using the first iteration of the deferred renderer.

With this fix in place, I could confirm that our simple Light pass is able to light a surface from any direction. The image above shows our old warehouse scene loaded into the Editor, with the forktruck being lit by the deferred renderer!

There are now several paths along which work can continue: we can extend our serialization format to include material data, continue improving the visuals, or start testing our code on Mac. Stay tuned for more!

Postprocessing Underpinnings

These past few weeks the majority of work went into establishing the underpinnings for frame postprocessing in the new V3 renderer.

A postprocessing effect that renders the framebuffer contents in grayscale.

Vortex 2.0 was the first version of the engine to introduce support for custom shaders and, although this opened the door to implementing postprocessing effects, the API was cumbersome to use this way.

In general, the process boiled down to maintaining two separate scene graphs and manually controlling the render-to-texture process. This spilled many engine details into user programs, was prone to breaking whenever the engine changed, and meant engine users had to write hundreds of lines of code.

With V3, I want to make render-to-texture the default render mode. This means the engine will never render directly into the default framebuffer; rather, it will always render to an FBO that we can then postprocess.

Architecting the renderer this way provides the opportunity to implement a myriad of effects that will improve the visuals significantly, while keeping the nitty-gritty details hidden under the hood.

The image above shows a postprocessed scene where the framebuffer contents were desaturated while being blitted onto the screen, producing a grayscale image.
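
A desaturation pass like this one boils down to very little shader code. Here is a sketch of what such a fragment shader might look like; the uniform and varying names are assumptions:

// Minimal grayscale postprocessing fragment shader (GLSL 330).
const char* kGrayscaleFragmentShader = R"(
#version 330 core
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D uSceneTexture;  // the FBO color attachment being blitted
void main() {
    vec3 color = texture(uSceneTexture, vTexCoord).rgb;
    float gray = dot(color, vec3(0.299, 0.587, 0.114));  // Rec. 601 luma weights
    fragColor = vec4(vec3(gray), 1.0);
}
)";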

For the upcoming weeks, work will focus on building upon this functionality to develop the components necessary to support more advanced rendering in V3. Stay tuned for more!

Wrapping up 2016: Lessons Learned from working on Vortex

2016 marks a year where a lot of work went into both the Vortex Engine and the Vortex Editor. Working on both these projects during my free time has been great to continue to hone my C++ and OpenGL skills. In this post, I am going to do a quick retrospective on the work done and present a few lessons learned.

Lessons on building a UI-heavy application

Rendering a deserialized scene using the new renderer.

The way I decided to approach this work was to divide Vortex into two separate projects: the Vortex Engine project and a brand new Editor built on top of it. The Engine itself has been ongoing since 2010.

Today, both projects have reached a level of maturity where we could clearly have two engineers working full-time on them.

The Editor is definitely one of the driving forces pushing the Engine to support more and more visual features, all in order to give the user more power of expression. The amount of work required to expose these features to the outside world, however, is something I did not expect.

Let’s walk through a simple example: selecting an Entity and editing its properties. In order to do this we must:

  1. Provide the user a means to select an Entity.
  2. React to this, introspecting the selected Entity and its components.
  3. Build and display a custom UI for each component in the Entity.
  4. For each component, render its UI widgets and preload them with the component’s current values.
  5. Provide the user the means to change these properties.
  6. Have the changes reflect in the 3D scene immediately.

This system can quickly grow into thousands of lines of code. Even though this code does not have the strenuous performance requirements of the rendering loop, we still need to develop responsive code with a good architecture that allows building more features on top of it.

The rewards from this effort are huge, however. The Editor UI is the main point of interaction of the user with Vortex and it’s the way that she can tell the Engine what she wants it to do. Good UI and, more importantly, good UX, are key in making the Editor enjoyable to the user.

Lessons on going with C++11

C++ Logo

I decided to finally make the jump and update the codebase from C++98 to C++11, and to raise the min-spec for running the renderer from OpenGL ES 2.0 to Core OpenGL 3.3.

Going to C++11 was the right choice. I find that C++11 allows for more power of expression when developing a large C++ codebase, and it provides several utilities for making the code cleaner.

There are a few takeaways from using C++11, however, that I think may not be as clear for people just getting started with this version of the language.

Lessons on C++11 enum classes

I like enum classes a lot and I tend to use them as much as possible. There were several places throughout the legacy Vortex Engine code where old C-style structs and/or static const values used to declare configuration parameters did not look too clean. C++ enum classes helped wrap these while also keeping their enclosing namespace clean.

The only limitation I found was using enum classes for bitmasks. enum class members can definitely be cast to make them behave as expected; doing so, however, is heavy-handed with the type system, and some may argue it does away with the advantages of having it.

Additionally, if you try to implicitly convert a binary operator expression involving an enum class value into a bool, you are going to hit a roadblock.

I like writing if (mask & kParameterFlag), as I find it clearer to read than a mandatory comparison against zero at the end, and C++11 enum classes do not give me that option.
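
Here is a minimal sketch of the roadblock; the enum and flag names just mirror the example above:

#include <cstdint>

enum class ParameterFlags : std::uint32_t {
    kNone          = 0,
    kParameterFlag = 1 << 0,
    kOtherFlag     = 1 << 1,
};

// enum classes do not get bitwise operators for free; define what you need.
inline ParameterFlags operator&(ParameterFlags a, ParameterFlags b) {
    return static_cast<ParameterFlags>(static_cast<std::uint32_t>(a) &
                                       static_cast<std::uint32_t>(b));
}

void example(ParameterFlags mask) {
    // if (mask & ParameterFlags::kParameterFlag)  // error: no conversion to bool
    if ((mask & ParameterFlags::kParameterFlag) != ParameterFlags::kNone) {
        // flag is set; the explicit comparison against "zero" is mandatory
    }
}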

Lessons on C++11 Weak Pointers

C++11 Shared Pointers (std::shared_ptr and std::weak_ptr) are great for an application like the Editor, where Entity references (pointers) need to be passed around.

Imagine this situation: we have a selected Entity with a few components, and several UI components hold pointers to all of these objects. Now, if the user decides to delete this entity or remove one of its components, how do we prevent the UI code from dereferencing dangling pointers? We would have to hunt down all references and null them.

Using C++11’s std::weak_ptr, we know that a pointer to a destroyed Entity or Component will fail to lock. This means that the UI controllers can detect situations where they are pointing to deleted data and gracefully handle them.
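
A small sketch of the pattern, with a hypothetical inspector-refresh function:

#include <memory>

class Entity { /* components, transform, ... */ };

// A UI controller keeping a non-owning reference to the current selection.
void refreshInspector(const std::weak_ptr<Entity>& selection) {
    if (std::shared_ptr<Entity> entity = selection.lock()) {
        // The Entity is still alive; it is safe to introspect it here.
    } else {
        // The Entity was deleted elsewhere; clear the UI instead of
        // dereferencing a dangling pointer.
    }
}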

Lessons on C++11 Shared Pointers

Like any other C++ object, smart pointers passed by value will be copied. Unlike copying a raw pointer, however, copying a smart pointer is a relatively expensive operation.

Smart pointers need to keep a reference count to know when it’s okay to delete the managed object and, additionally, C++ mandates that this bookkeeping be performed in a thread-safe fashion. In practice, this means an atomic reference-count update (which is not free, especially on weaker hardware) every time a smart pointer is created, copied or destroyed.

If you’re not careful, your CPU cycles will be spent copying pointers around, and you may have a hard time when you try to scale your Engine down to run on a mobile device.

I investigated this issue and found this amazing write-up by Herb Sutter: https://herbsutter.com/2013/06/05/gotw-91-solution-smart-pointer-parameters/. The idea is to avoid passing shared pointers by copy to any function that does not intend to keep a long-term reference to the object.

Don’t be afraid of calling std::shared_ptr::get() to pass a raw pointer into a short, pure function that performs temporary work. Opt into smart pointer parameters only when you want to signal to the outside world that the callee shares ownership of the passed-in pointer.
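
A sketch of the resulting convention; the types and functions are illustrative:

#include <memory>

class Mesh { /* vertex data, ... */ };

// Short, pure helper: takes a raw pointer; no reference-count traffic.
void computeBounds(const Mesh* mesh);

// Long-term ownership: takes shared_ptr by value to signal shared ownership.
void cacheMesh(std::shared_ptr<Mesh> mesh);

void example(const std::shared_ptr<Mesh>& mesh) {
    computeBounds(mesh.get());  // cheap: no atomic ref-count update
    cacheMesh(mesh);            // one copy, exactly where ownership is shared
}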

Lessons on using Core OpenGL

OpenGL Logo. Copyright (C) Khronos Group.

OpenGL Logo. Copyright (C) Khronos Group.

Choosing a specific minimum version of Core OpenGL helps root out the questions that pop up every time you use anything outside OpenGL 1.1 and wonder whether you should implement the ARB / EXT variants as well.

Core OpenGL 3.3 makes it easier to discover the optimal GPU usage path, as you are now required to use VBOs, VAOs, Shaders and other modern video card constructs. It also has the added advantage that legacy OpenGL calls (which should not be used anyway) will simply not work.

Differences between OpenGL implementations, however, are still pervasive enough that code tested and verified to work on Windows may not work at all on OS X due to differences in the video driver. Moving the codebase to a mobile device will again prove a challenge.

The lesson here is to never assume that your OpenGL code works. Always test on all the platforms you claim to support.

In Closing

These were some of the most prominent lessons learned from working on Vortex this year. I am looking forward to continuing to work on these projects through 2017!

I think we’ve only scratched the surface of what the new renderer’s architecture can help build, and I definitely want to continue developing the renderer to support more immersive experiences, as well as the Editor, so that it exposes even more features to the user!

Thank you for joining in through the year and, as usual, stay tuned for more!