Wrapping up 2016: Lessons Learned from working on Vortex

2016 marks a year where a lot of work went into both the Vortex Engine and the Vortex Editor. Working on both these projects during my free time has been great to continue to hone my C++ and OpenGL skills. In this post, I am going to do a quick retrospective on the work done and present a few lessons learned.

Lessons on making a UI-heavy application

Rendering a deserialized scene using the new renderer.

The way I decided to approach this work was to divide Vortex into two separate projects: the Vortex Engine project and a brand new Editor built on top of it. The Engine itself has been on-going since 2010.

Today, both projects have reached a level of maturity where we could easily have two engineers working full-time on them.

The Editor is definitely one of the driving forces pushing the Engine to support more and more visual features, all in the interest of giving the user more power of expression. The amount of work required to expose these features to the outside world, however, is something I did not expect.

Let’s walk through a simple example: selecting an Entity and editing it so we can change its properties. In order to do this we must:

  1. Provide the user a means to select an Entity.
  2. React to this, introspecting the selected Entity and its components.
  3. Build and display a custom UI for each component in the Entity.
  4. For each component, render its UI widgets and preload them with the component’s current values.
  5. Provide the user the means to change these properties.
  6. Have the changes reflect in the 3D scene immediately.

This system can quickly grow into thousands of lines of code. Even though this code does not have the stringent performance requirements of the rendering loop, it still needs to be responsive and built on an architecture that allows more features to be layered on top of it.
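
To make the steps above concrete, here is a tiny, self-contained sketch of the introspection-driven flow. Everything in it (Property, Component, buildComponentUI) is an illustrative stand-in, not the actual Vortex Editor API:

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Illustrative stand-ins for engine-side data; not the actual Vortex types.
struct Property {
    std::string name;
    float value;
};

struct Component {
    std::string name;
    std::vector<Property> properties;
};

// Steps 2-4: introspect the component and build "widgets" preloaded with its
// current values. Here a widget is reduced to a callback that writes edits
// back into the component (step 5), so the scene reflects them (step 6).
std::function<void(std::size_t, float)> buildComponentUI(Component& component)
{
    std::cout << "[" << component.name << "]\n";
    for (const Property& p : component.properties)
        std::cout << "  " << p.name << " = " << p.value << "\n";

    return [&component](std::size_t index, float newValue) {
        component.properties[index].value = newValue;
    };
}

int main()
{
    Component light{"PointLight", {{"intensity", 1.0f}, {"radius", 5.0f}}};

    auto edit = buildComponentUI(light);
    edit(0, 2.5f); // the user tweaks "intensity" through the UI

    std::cout << "intensity is now " << light.properties[0].value << "\n";
}
```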

The rewards from this effort are huge, however. The Editor UI is the user’s main point of interaction with Vortex and the way she tells the Engine what she wants it to do. Good UI and, more importantly, good UX are key to making the Editor enjoyable to use.

Lessons on going with C++11

C++ Logo

I decided to finally make the jump and update the codebase from C++98 to C++11, and to raise the min-spec for running the renderer from OpenGL ES 2.0 to Core OpenGL 3.3.

Going to C++11 was the right choice. I find that C++11 allows for more expressive power when developing a large C++ codebase, and it provides several utilities that make the code cleaner.

There are a few takeaways from using C++11, however, that I think may not be as clear for people just getting started with this version of the language.

Lessons on C++11 enum classes

I like enum classes a lot and I tend to use them as much as possible. There were several places throughout the legacy Vortex Engine code where old C-style structs and/or static const values used to declare configuration parameters did not look too clean. C++11 enum classes helped wrap these while keeping their enclosing namespaces clean.

The only limitation I found was using enum classes for bitmasks. enum class members can certainly be cast to make them behave as expected; doing so, however, strong-arms the type system, and some may argue it does away with the advantages of having it in the first place.

Additionally, if you try to implicitly convert a binary-operator expression involving an enum class value into a bool, you are going to hit a roadblock.
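
Here is a minimal sketch of the problem, using an illustrative ParameterFlags enum rather than an actual Engine type:

```cpp
#include <cstdint>

// Illustrative flags, not an actual Vortex Engine type.
enum class ParameterFlags : std::uint32_t {
    kParameterFlag = 1 << 0,
    kOtherFlag     = 1 << 1,
};

// Scoped enums have no built-in bitwise operators, so we have to cast to the
// underlying type and back ourselves.
inline ParameterFlags operator&(ParameterFlags a, ParameterFlags b)
{
    return static_cast<ParameterFlags>(static_cast<std::uint32_t>(a) &
                                       static_cast<std::uint32_t>(b));
}

void configure(ParameterFlags mask)
{
    // if (mask & ParameterFlags::kParameterFlag)   // error: the result does
    //                                               // not convert to bool
    if ((mask & ParameterFlags::kParameterFlag) != static_cast<ParameterFlags>(0)) {
        // the explicit comparison against zero is the noisy part
    }
}
```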

I like writing if (mask & kParameterFlag), as I find it clearer to read than having a mandatory comparison against zero at the end, and C++11 enum classes do not give me that option.

Lessons on C++11 Weak Pointers

C++11’s smart pointers (std::shared_ptr and std::weak_ptr) are great for an application like the Editor, where Entity references (pointers) need to be passed around.

Imagine this situation: we have an Entity with a few components that is selected. We have several UI components that are holding pointers to all these objects. Now, if the user decides to delete this entity or remove one of its components, how do we prevent the UI code from dereferencing dangling pointers? We would have to hunt down all references and null them.

Using C++11’s std::weak_ptr, we know that a pointer to a destroyed Entity or Component will fail to lock. This means the UI controllers can detect situations where they are pointing to deleted data and handle them gracefully.
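
A minimal, self-contained sketch of this pattern follows; EntityData and InspectorController are illustrative stand-ins, not actual Editor classes:

```cpp
#include <iostream>
#include <memory>

// Illustrative stand-in for an Engine-side object.
struct EntityData {
    const char* name = "cube";
};

// A UI controller only keeps a weak reference to the entity it inspects.
struct InspectorController {
    std::weak_ptr<EntityData> target;

    void refresh() const
    {
        // lock() yields a shared_ptr if the entity is still alive,
        // or an empty pointer if it has been destroyed elsewhere.
        if (std::shared_ptr<EntityData> entity = target.lock()) {
            std::cout << "Inspecting " << entity->name << "\n";
        } else {
            std::cout << "Entity was deleted; clearing the panel\n";
        }
    }
};

int main()
{
    auto entity = std::make_shared<EntityData>();
    InspectorController inspector{entity};

    inspector.refresh(); // entity still alive
    entity.reset();      // the user deletes the entity
    inspector.refresh(); // the controller detects the stale reference
}
```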

Lessons on C++11 Shared Pointers

Like any other C++ object, smart pointers passed by value will be copied. Unlike copying a raw pointer, however, copying a smart pointer is an expensive operation.

Shared pointers need to keep a reference count to know when it is okay to delete the managed object and, additionally, the standard mandates that this bookkeeping be thread-safe. This means the reference count has to be updated atomically (or under some other form of synchronization) every time a shared pointer is created, copied or destroyed.

If you are not careful, CPU cycles will be wasted copying pointers around, and you may have a hard time when you try to scale the Engine down to run on a mobile device.

I investigated this issue and found this amazing write-up by Herb Sutter: https://herbsutter.com/2013/06/05/gotw-91-solution-smart-pointer-parameters/. The idea is to avoid passing shared pointers by value to any function that does not intend to keep a long-term reference to the object.

Don’t be afraid of calling std::shared_ptr::get() to pass a raw pointer to a short, pure function that performs temporary work, and only opt into smart pointer parameters when you want to signal to the outside world that the function shares ownership of the passed-in object.
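
Here is a minimal sketch of that guideline; Mesh and MeshCache are made-up stand-ins, not Vortex types:

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

struct Mesh {
    std::vector<float> vertices;
};

// Short, non-owning work: take a raw pointer (or a reference). No reference
// count is touched, so there is no synchronization cost.
std::size_t countVertices(const Mesh* mesh)
{
    return mesh ? mesh->vertices.size() : 0;
}

// This class keeps a long-term reference, so it takes a shared_ptr by value
// to signal that it shares ownership of the mesh.
class MeshCache {
public:
    void add(std::shared_ptr<Mesh> mesh) { m_meshes.push_back(std::move(mesh)); }

private:
    std::vector<std::shared_ptr<Mesh>> m_meshes;
};

int main()
{
    auto mesh = std::make_shared<Mesh>();

    countVertices(mesh.get()); // temporary work: pass the raw pointer

    MeshCache cache;
    cache.add(mesh);           // shared ownership: pass the smart pointer
}
```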

Lessons on using Core OpenGL

OpenGL Logo. Copyright (C) Khronos Group.

Choosing a specific minimum version of Core OpenGL helps root out the questions that pop up every time you use anything beyond OpenGL 1.1 and wonder whether you should implement the ARB / EXT variants as well.

Core OpenGL 3.3 makes it easier to discover the optimal GPU usage path, as you are now required to use VBOs, VAOs, shaders and other modern GPU constructs. It also has the added advantage that legacy OpenGL calls (which should not be used anyway) will simply not work.
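
As a rough illustration, this is the kind of setup the Core profile pushes you toward. It assumes a valid 3.3 Core context and an already-initialized GL function loader (GLEW, glad or similar), using whichever GL header the project already includes:

```cpp
// Builds a VAO/VBO pair for a single triangle. Under Core OpenGL 3.3 there
// is no fixed-function fallback: vertex data must flow through buffers and
// vertex array objects into shader attributes.
GLuint createTriangleVAO()
{
    const GLfloat vertices[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    GLuint vao = 0;
    GLuint vbo = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    // Upload the vertex data into a VBO; the attribute pointer call below
    // records this buffer in the VAO's state.
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Attribute 0: three floats per vertex, matching a
    // "layout(location = 0) in vec3 position;" declaration in the shader.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindVertexArray(0);
    return vao;
}
```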

Differences between OpenGL implementations, however, are still pervasive enough that code tested and verified to work on Windows may not work at all on OSX due to differences in the video driver. Moving the codebase to a mobile device will again prove a challenge.

The lesson here is to never assume that your OpenGL code works. Always test on all the platforms you claim to support.

In Closing

These were some of the most prominent lessons learned from working on Vortex this year. I am looking forward to continuing to work on these projects through 2017!

I think we’ve only scratched the surface of what the new renderer’s architecture can help build and I definitely want to continue developing the renderer to support more immersive experiences, as well as the Editor so it exposes even more features to the user!

Thank you for joining in through the year and, as usual, stay tuned for more!

Reaching Feature Parity

Last week, work continued on different parts of the Editor. As the new renderer falls into place, I’ve been going through the Editor code, finding scaffolding and other legacy pieces that were originally built to make the Editor work with the 2011 renderer but were disabled as part of the renderer update.

Rendering a deserialized scene using the new renderer.

As I was working on the Editor, I was careful to have a clearly defined separation between Editor code and Engine code. This helped keep the impact of swapping out the renderer for a new one limited to a single C++ file.

Changing the renderer, however, did bring its fair share of feature regressions to the Editor, as some semantics had changed regarding where entities need to be registered in order to be drawn and how the UI controllers inspect their components.

Today, after several fixes here and there, I’ve been able to load a scene that had been serialized using the old Editor and have its entities displayed in the 3D World. This is good confirmation that the Editor is reaching a stable point, with feature parity with what we had a few months ago when we decided to rewrite the renderer.

Working on both the Editor and the Engine is, and has been, an amazing experience. In next week’s post, as a way to wrap up the year, I’m going to break down a few of the lessons learned. Stay tuned for more!

New RenderCamera Work

This week I reimplemented the camera logic from scratch.

The previous camera code in Vortex dates from around 2010 and, even though it worked and helped ship a number of Apps, it had limitations that made it difficult to build new features on top of it.

I was interested, in particular, in providing a mechanism through which camera animations could be controlled from native code and scripts, rather than just responding to user input directly.

The new vtx::RenderCamera is implemented as a Component that can be attached directly to an Entity, effectively making it possible to turn any entity in the scene into a camera. Once the active camera is selected, it is passed down to the new Rendering System, which draws the scene from that camera’s point of view.

This allows configuring a scene with several “vantage point” cameras and dynamically switching between them to make different cuts. At the same time, because cameras are attached to entities, entity movement provides a natural means to pan and rotate cameras through the scene.
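
A purely hypothetical sketch of what this could look like in client code follows; apart from vtx::RenderCamera, the Entity, Scene and rendering-system calls shown are illustrative placeholders, not the real Vortex API:

```cpp
// Hypothetical client code: only vtx::RenderCamera is an actual Engine name.
vtx::Entity* vantagePoint = scene->createEntity("vantage_point_1");

// Attaching a RenderCamera component turns any entity into a camera.
vtx::RenderCamera* camera = vantagePoint->addComponent<vtx::RenderCamera>();

// Moving the entity pans and rotates the camera along with it.
vantagePoint->setPosition(0.0f, 2.0f, 10.0f);

// The active camera is handed to the rendering system, which draws the
// scene from its point of view; switching cameras produces a "cut".
renderingSystem->setActiveCamera(camera);
```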

I’m very happy with the result of this rework, as it provides all the functionality we had before on top of a solid, extensible foundation.

Unfortunately, we still don’t have a way to draw a camera gizmo in the Editor, so there is no screenshot this week. Next week, however, I’m planning to continue working on some visual features, so definitely stay tuned for more!

Material Editing via Shader Introspection

Last week I started working on an advanced Material Editor for Vortex. If you remember, one of the objectives of the new V3 Renderer was to completely overhaul the Engine’s material system to support advanced rendering techniques.

New Material Editor exposes the Shader Uniforms so they can be easily tuned.

Building a dedicated editor that allows changing material properties on the fly will make it easier to tune shaders and achieve compelling visual results.

Ever since I developed the original Shader Composer project years ago, I’ve been puzzled about how one might go about providing a flexible enough means to supply the shader with meaningful values from a UI. The solution to this problem comes in the form of shader introspection.

The idea is that, once the shaders have been compiled and linked, we pull the (referenced) shader uniforms from the GL. This lets us dynamically ask which properties can be supplied to the shader and configure its behavior at runtime; there is no need to stop the Editor or App to change values and recompile.
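
In OpenGL terms, this boils down to asking the linked program object for its active uniforms. The following is a minimal sketch, assuming a valid Core 3.3 context and the project’s usual GL header; UniformInfo and queryActiveUniforms are illustrative names, not the actual Vortex API:

```cpp
#include <string>
#include <vector>

// Description of one active uniform as reported by the GL.
struct UniformInfo {
    std::string name;
    GLenum type;  // e.g. GL_FLOAT_VEC3, GL_FLOAT_MAT4, GL_SAMPLER_2D
    GLint size;   // array size; 1 for non-array uniforms
};

// "program" is an already-linked shader program handle.
std::vector<UniformInfo> queryActiveUniforms(GLuint program)
{
    GLint count = 0;
    GLint maxNameLength = 0;
    glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
    glGetProgramiv(program, GL_ACTIVE_UNIFORM_MAX_LENGTH, &maxNameLength);

    std::vector<UniformInfo> uniforms;
    std::vector<GLchar> nameBuffer(maxNameLength);

    for (GLint i = 0; i < count; ++i) {
        GLsizei length = 0;
        UniformInfo info;
        // Only uniforms actually referenced by the shaders show up here.
        glGetActiveUniform(program, static_cast<GLuint>(i), maxNameLength,
                           &length, &info.size, &info.type, nameBuffer.data());
        info.name.assign(nameBuffer.data(), length);
        uniforms.push_back(info);
    }
    return uniforms;
}
```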

The image above shows a basic textfield-based editor that can be used to set the uniform values for the shader associated with the selected Entity. The plan moving forward is to provide specialized editors that help enter vectors, matrices or even image references.

We’ll continue with this next week. Stay tuned for more!