Deferred Realtime Point Lights

It has been a while since my last update, but I’m excited to share significant progress with the new renderer. As of a couple of weeks ago, the new renderer now supports realtime deferred point lights!

Point Lights in Vortex Engine 3.0’s Deferred Renderer. Sponza scene Copyright (C) Crytek.

Point lights in a deferred renderer are a bit more complicated to implement than directional lights. For directional lights, we can usually get away with drawing a fullscreen quad to calculate the light contribution to the scene. With point lights, we need to render a light volume for each light, calculating the light contribution for the intersecting meshes.

The following image is from one of the earlier tests I was conducting while implementing the lights. Here, I decided to render the light volumes as wireframe meshes for debugging purposes.

Deferred Point Lights with their Volumes rendered as a wireframe.

If you look closely, you can see how each light is contained to a sphere and only contributes to the portions of the scene it is intersecting. This is the great advantage of a deferred renderer when compared to a traditional forward renderer.

In a forward renderer, we would have had to draw the entire scene for each light. Only at the very end of the pipeline would we realize that a point light contributed nothing to a fragment, and by that point we would have already performed all the operations in the fragment shader. In comparison, a deferred renderer only computes the subsection of the screen affected by each light volume. This allows for very large numbers of realtime lights in a scene, with the total cost of many lights on screen amounting to roughly that of one big light.

Determining Light Intersections

One problem that arises when rendering point light volumes is determining the intersection with the scene geometry. There are different ways of solving this problem. I decided to base my approach on this classic presentation by NVIDIA.

Light Volume Stencil Testing. We use the stencil buffer to determine which fragments are at the intersection of the light volume with a mesh.

The idea is to use the stencil buffer to cleverly test the light volumes against the z-buffer. In order for this to work, I had to do a pre-pass, rendering the back faces of the light volumes. During this pass, we update the stencil value only on z-fail. Z-fail means that we can’t see the back of our light volume because another mesh is there – exactly the intersection we’re looking for!

Once the stencil buffer pass is complete, we do a second pass of the light volumes, this time with the stencil test set to match the reference value (and z-testing disabled). The fragments where the test passes are lit by the light.

The image above shows the idea. In it, you can see how the light volume determines the fragments that the point light is affecting.
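To make the two passes concrete, here is a minimal C++ sketch of the state setup involved, assuming Core OpenGL; PointLight and drawLightVolume() are hypothetical stand-ins for the engine's own types, and the real renderer code may differ:

```cpp
// GL loader assumed (GLEW here, purely for illustration).
#include <GL/glew.h>

struct PointLight { /* position, radius, color, ... */ };

// Hypothetical helper: issues the draw call for the light's sphere mesh
// with whatever shader is currently bound.
void drawLightVolume(const PointLight& light) { /* ... */ }

void renderPointLight(const PointLight& light)
{
    // Pass 1: back faces of the light volume; mark z-fail fragments in the stencil buffer.
    glEnable(GL_STENCIL_TEST);
    glClear(GL_STENCIL_BUFFER_BIT);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_REPLACE, GL_KEEP);   // write the reference value on depth fail only
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);                       // leave the depth buffer untouched
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glCullFace(GL_FRONT);                        // render back faces only
    drawLightVolume(light);                      // stencil-only pass

    // Pass 2: shade only the fragments that matched; depth testing disabled.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDisable(GL_DEPTH_TEST);
    glCullFace(GL_BACK);
    drawLightVolume(light);                      // with the point light shader bound

    // Restore default state for the next light / the rest of the frame.
    glDisable(GL_STENCIL_TEST);
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
}
```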

Screenshots

Here are some more screenshots of the technique.

In the following image, only the lion head had a bump map. For the rest of the meshes, we’re just using the geometric normal. Even as I was building this system, I was in awe at the incredible interaction of normal mapping with the deferred point lights. Take a look at the lion head (zoom in for more details), the results are astounding.

Vortex Engine 3.0 – Deferred Point Lights interacting with normal mapped and non-normal mapped surfaces.

Here’s our old friend, the test cube, being lit by 3 RGB point lights.

Vortex Engine 3.0 – Our trusty old friend, the test cube, being lit by 3 realtime deferred point lights.

I’m still playing with the overall light intensity scale (i.e. what does “full intensity” mean?). Lights are pretty dim in the Sponza scene, so I might bring them up across the board to be more like in the cube image.

Conclusion

Deferred rendering is definitely an interesting technique that brings a lot to the table. In recent years it has been superseded by more modern techniques like Forward+; however, the results are undeniable – especially when combined with elaborate shading techniques such as normal mapping.

The next steps will be to implement spot light support and start implementing post processing techniques.

Stay tuned for more!

Binding C++ to Lua

For the last couple of weeks, a lot of work has been going into developing the scripting API that the engine is exposing to Lua. Embedding a scripting language into a large C++ codebase has been a very interesting experience and I’ve been able to experience first hand why Lua is regarded as such a strong scripting language.

Introspection of an Entity from a Lua script running in the Console.

Lua offers a myriad of ways we can develop a scripting interface for our native code.

A naïve approach would be to expose every function of the engine in the global namespace and have scripts use these directly. Although this method would certainly work, we want to offer an object-oriented API to the engine and its different components, so a more elaborate solution is required.

I ultimately decided to build the interface from scratch, following the Lua concepts of tables and metatables. The reason is that building everything myself lets me clearly see the cost of the binding as objects are passed back and forth, which will help keep an eye on performance.

Initial Test of the Lua-C++ binding. In this example, we query and rename an Entity.

In order to keep the global namespace as clean as possible, the idea was to create a Lua table procedurally from the C++ side where all functions and types would live. Conceptually, this table is our namespace, so I named it vtx accordingly. It’s really the only global variable that the engine registers.

The next step was to start populating the vtx namespace. Two functions I knew I wanted to expose right away were Instantiate and Find:
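The original snippet is not reproduced here, but a minimal sketch of this registration using the plain Lua C API could look as follows; registerVortexNamespace, vtx_instantiate and vtx_find are illustrative names, not the engine's actual ones:

```cpp
#include <lua.hpp>

// Hypothetical lua_CFunction wrappers around the engine's Instantiate/Find logic.
static int vtx_instantiate(lua_State* L) { lua_pushnil(L); return 1; }  // engine logic elided
static int vtx_find(lua_State* L)        { lua_pushnil(L); return 1; }  // engine logic elided

void registerVortexNamespace(lua_State* L)
{
    lua_newtable(L);                 // the "vtx" namespace table

    lua_pushcfunction(L, vtx_instantiate);
    lua_setfield(L, -2, "instantiate");

    lua_pushcfunction(L, vtx_find);
    lua_setfield(L, -2, "find");

    lua_setglobal(L, "vtx");         // the only global the engine registers
}
```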

We now have functions. But how do we do objects? And how do we expose our vtx::Entity objects to Lua?

Let’s recap a bit. We know that entities are “engine objects”, in the sense that they live in C++ and their lifecycles are managed by the Vortex Engine. What we want is to provide a lightweight object that Lua can interact with, but when push comes to shove, the native side will be able to leverage the full C++ interface of the engine.

Lua offers the concept of a metatable that helps achieve this. Metatables can be associated with any table to provide special semantics. One special semantic we are interested in is the __index property, which allows implementing the Prototype design pattern.

I won’t go into the details of how the Prototype design pattern works, but suffice it to say that whenever a function is called on a table and the table does not have an implementation for it, the prototype will be responsible for servicing it.

This is exactly what we want. What we can do then is wrap our vtx::Entity instances in Lua tables and provide a common metatable to all of them that we implement in the C++ side. Even better, because of this approach Lua will take care of passing the Entity Table we are operating on as the first parameter to every function call. We can use this as the “this” object for the method.

Putting it all together, let’s walk through how entities expose the vtx::Entity::setName() function to Lua (a sketch of this flow in code follows the list):

  1. From the native side, we create a metatable. Call it vtx.Entity.
  2. We register in this metatable a C++ function that receives a table and a string and can set the name of a native Entity. We assign it to the “set_name” property of the metatable.
  3. Whenever a script requests an Entity (instantiate, find), the function servicing the call will:
    1. Create a new table.
    2. Set the table’s metatable to vtx.Entity.
    3. Store a pointer to the C++ Entity in it.
  4. When a script invokes the Entity’s set_name function, it will trigger a lookup into the metatable’s functions.
  5. The function we registered under set_name will be called. We are now back in C++.
  6. The native function will pop from the stack a string (the new name) and the “Entity” on which the method was called.
  7. We reinterpret_cast the Entity Table’s stored pointer as a vtx::Entity pointer and call our normal setName() function, passing down the string.
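As a rough sketch of steps 1 through 7 using the plain Lua C API (the actual engine code surely differs; names such as createEntityMetatable, pushEntity and the "_native" field are illustrative):

```cpp
#include <lua.hpp>
#include <string>

namespace vtx {
// Stand-in for the engine's Entity class.
class Entity {
public:
    void setName(const std::string& name) { m_name = name; }
private:
    std::string m_name;
};
}

// Step 2: C++ function registered under "set_name" in the vtx.Entity metatable.
static int lua_set_name(lua_State* L)
{
    // Step 6: the Entity table is argument 1, the new name is argument 2.
    luaL_checktype(L, 1, LUA_TTABLE);
    const char* newName = luaL_checkstring(L, 2);

    // Step 7: fetch the stored pointer and call the native method.
    lua_getfield(L, 1, "_native");
    vtx::Entity* entity = reinterpret_cast<vtx::Entity*>(lua_touserdata(L, -1));
    entity->setName(newName);
    return 0;
}

// Step 1: create the vtx.Entity metatable once, from the native side.
void createEntityMetatable(lua_State* L)
{
    luaL_newmetatable(L, "vtx.Entity");
    lua_pushvalue(L, -1);
    lua_setfield(L, -2, "__index");      // the metatable acts as its own prototype
    lua_pushcfunction(L, lua_set_name);
    lua_setfield(L, -2, "set_name");
    lua_pop(L, 1);
}

// Step 3: wrap a native Entity in a fresh table whenever a script asks for one.
void pushEntity(lua_State* L, vtx::Entity* entity)
{
    lua_newtable(L);
    luaL_getmetatable(L, "vtx.Entity");
    lua_setmetatable(L, -2);
    lua_pushlightuserdata(L, entity);
    lua_setfield(L, -2, "_native");      // store the C++ pointer in the wrapper table
}
```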

Et voilà. That is everything. The second image above shows in the console log how all this looks to a Lua script. At no point does the script developer need to know that the logic flow is jumping between Lua and C++ as her program executes.

We can also see in the screenshot how the Editor’s entity list picks up the name change. This shows how we are actually altering the real engine objects and not some mock Lua clone.

As I mentioned in the beginning of this post, developing a Lua binding for a large C++ codebase from scratch is a lot of fun. I will continue adding more functionality over the coming weeks and then we’re going to be ready to go back and revisit scene serialization.

Stay tuned for more!

Wrapping up 2016: Lessons Learned from working on Vortex

2016 marks a year where a lot of work went into both the Vortex Engine and the Vortex Editor. Working on both these projects during my free time has been great to continue to hone my C++ and OpenGL skills. In this post, I am going to do a quick retrospective on the work done and present a few lessons learned.

Lessons on building a UI-heavy application

Rendering a deserialized scene using the new renderer.

The way I decided to approach this work was to divide Vortex into two separate projects: the Vortex Engine project and a brand new Editor built on top of it. The Engine itself has been on-going since 2010.

Today, both projects have reached a level of maturity where we could clearly have two engineers working full-time on them.

The Editor is definitely one of the driving forces that pushes the Engine into supporting more and more visual features. This is to provide the user more power of expression. The amount of work required for exposing these features to the outside world, however, is something that I did not expect.

Let’s walk through a simple example: selecting an Entity and editing it so we can change its properties. In order to do this we must:

  1. Provide the user a means to select an Entity.
  2. React to this, introspecting the selected Entity and its components.
  3. Build and display a custom UI for each component in the Entity.
  4. For each component, render its UI widgets and preload them with the component’s current values.
  5. Provide the user the means to change these properties.
  6. Have the changes reflect in the 3D scene immediately.

This system can quickly grow into thousands of lines of code. Even if the code does not have the strenuous performance requirements of the rendering loop, we still need to develop responsive code with a good architecture that allows building more features on top of it.

The rewards from this effort are huge, however. The Editor UI is the main point of interaction of the user with Vortex and it’s the way that she can tell the Engine what she wants it to do. Good UI and, more importantly, good UX, are key in making the Editor enjoyable to the user.

Lessons on going with C++11

C++ Logo

I decided to finally make the jump and update the codebase from C++98 to C++11, and the min-spec for running the renderer from OpenGL ES 2.0 to Core OpenGL 3.3.

Going to C++11 was the right choice. I find that C++11 allows for more power of expression when developing a large C++ codebase, and it provides several utilities that make the code cleaner.

There are a few takeaways from using C++11, however, that I think may not be as clear for people just getting started with this version of the language.

Lessons on C++11 enum classes

I like enum classes a lot and I tend to use them as much as possible. There were several places throughout the legacy Vortex Engine code where old C-style structs and static const values used to declare configuration parameters did not look too clean. C++ enum classes helped wrap these while keeping their enclosing namespace clean.

The only limitation I found was using enum classes for bitmasks. enum class members can certainly be cast to make them behave as expected, but doing so strong-arms the type system, and some may argue it does away with the advantages of having it.

Additionally, if you’re trying to implicitly cast a binary operator expression involving an enum class value into a bool, you are going to find a roadblock:
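A minimal sketch of the roadblock, assuming a hypothetical RenderFlags enum with a kParameterFlag bit (the names are illustrative, not engine code):

```cpp
#include <cstdint>

enum class RenderFlags : uint32_t {
    kNone          = 0,
    kParameterFlag = 1 << 0,
    kDepthWrite    = 1 << 1,
};

// Bitwise operators have to be written by hand for every enum class:
inline RenderFlags operator&(RenderFlags a, RenderFlags b)
{
    return static_cast<RenderFlags>(static_cast<uint32_t>(a) & static_cast<uint32_t>(b));
}

void configure(RenderFlags mask)
{
    // if (mask & RenderFlags::kParameterFlag)   // error: no implicit conversion to bool
    if ((mask & RenderFlags::kParameterFlag) != RenderFlags::kNone)  // the mandatory comparison
    {
        // ...
    }
}
```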

I like writing if (mask & kParameterFlag), as I find it clearer to read than having a mandatory comparison against zero at the end, and C++11 enum classes do not give me that option.

Lessons on C++11 Weak Pointers

C++11 Shared Pointers (std::shared_ptr and std::weak_ptr) are great for an application like the Editor, where Entity references (pointers) need to be passed around.

Imagine this situation: we have an Entity with a few components that is selected. We have several UI components that are holding pointers to all these objects. Now, if the user decides to delete this entity or remove one of its components, how do we prevent the UI code from dereferencing dangling pointers? We would have to hunt down all references and null them.

Using C++11’s std::weak_ptr, we know that a pointer to a destroyed Entity or Component will fail to lock. This means that the UI controllers can detect situations where they are pointing to deleted data and handle them gracefully.
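A minimal sketch of how a UI controller could use this, assuming the Editor hands out std::weak_ptr<Entity> references (InspectorPanel is an illustrative name, not the Editor's actual class):

```cpp
#include <memory>

class Entity { /* engine object */ };

class InspectorPanel {
public:
    void setTarget(std::weak_ptr<Entity> target) { m_target = target; }

    void refresh()
    {
        if (std::shared_ptr<Entity> entity = m_target.lock()) {
            // The Entity is still alive; safe to read its components.
        } else {
            // The Entity was deleted elsewhere; clear the panel instead of
            // dereferencing a dangling pointer.
        }
    }

private:
    std::weak_ptr<Entity> m_target;
};
```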

Lessons on C++11 Shared Pointers

Like any other C++ object, smart pointers passed by value will be copied. Unlike copying a raw pointer, however, copying a smart pointer is an expensive operation.

Smart pointers need to keep a reference count to know when it’s okay to delete the managed object and, additionally, C++ mandates that this bookkeeping be performed in a thread-safe fashion. This means that a mutex will have to be locked and unlocked for every smart pointer we create and destroy.

If you are not careful, your CPU cycles will be spent copying pointers around, and you may have a hard time when you try to scale your Engine down to run on a mobile device.

I investigated this issue and found this amazing write up by Herb Sutter: https://herbsutter.com/2013/06/05/gotw-91-solution-smart-pointer-parameters/. The idea is to avoid passing shared pointers by copy to any function that does not intend to keep a long-term reference to the object.

Don’t be afraid of calling std::shared_ptr::get() to pass a raw pointer to a short, pure function that performs temporary work, and only opt into smart pointer parameters when you want to signal to the outside world that the function shares ownership of the passed-in pointer.
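A short sketch of that guideline, along the lines of Sutter's GotW #91; the Entity and SelectionController types here are illustrative:

```cpp
#include <memory>
#include <utility>

class Entity { /* ... */ };

// Short, pure helper: a raw pointer implies no ownership and causes no ref-count traffic.
void drawBoundingBox(const Entity* entity) { /* temporary, non-owning work */ }

// Only code that actually keeps or shares ownership takes a shared_ptr by value.
class SelectionController {
public:
    void select(std::shared_ptr<Entity> entity) { m_selected = std::move(entity); }
private:
    std::shared_ptr<Entity> m_selected;
};

void example(const std::shared_ptr<Entity>& entity, SelectionController& selection)
{
    drawBoundingBox(entity.get());   // temporary work: no copy, no atomic operations
    selection.select(entity);        // shared ownership is intended here, so a copy is fine
}
```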

Lessons on using Core OpenGL

OpenGL Logo. Copyright (C) Khronos Group.

Choosing to go for a specific minimum version of Core OpenGL helps root out all the questions that pop up every time you use anything outside OpenGL 1.1 and wonder if you should implement the ARB / EXT variants as well.

Core OpenGL 3.3 makes it easier to discover the optimal GPU usage path, as you are now required to use VBOs, VAOs, shaders and other modern video card constructs. It also has the added advantage that legacy OpenGL calls (which should not be used anyway) will simply not work.

Differences between OpenGL implementations, however, are still pervasive enough that code tested and verified to work on Windows may not work at all on OS X due to differences in the video driver. Moving the codebase to a mobile device will again prove a challenge.

The lesson here is to never assume that your OpenGL code works. Always test on all the platforms you claim to support.

In Closing

These were some of the most prominent lessons learned from working on Vortex this year. I am looking forward to continuing to work on these projects through 2017!

I think we’ve only scratched the surface of what the new renderer’s architecture can help build and I definitely want to continue developing the renderer to support more immersive experiences, as well as the Editor so it exposes even more features to the user!

Thank you for joining in through the year and, as usual, stay tuned for more!

Shader Texture Mapping and Interleaved Arrays

Work continues on the new Rendering System for the Vortex Engine.

This week was all about implementing basic texture mapping using GL Core. The following image shows our familiar box, only this time, it’s being textured from a shader.

Texture Mapping via Interleaved Arrays and Shaders in the Vortex Engine.

A number of changes had to go into the Renderer in order to perform texture mapping. These touched almost all the layers of the Engine and Editor.

  1. First, I wrote a “Single Texture” shader in GLSL to perform the perspective transform of the Entity’s mesh, interpolate its UV texture coordinates and sample a texture.
  2. Second, I had to change the way the retained mesh works in order to be able to send texture coordinate data to the video card (more on this later).
  3. Finally, I had to modify the Editor UI to allow selecting which shader is to be used when rendering an Entity.

Regarding how to submit mesh texture coordinate data to the video card: because Core OpenGL uses Vertex Buffer Objects (VBOs), it was clear that UVs had to be sent to (and retained in) GPU memory too.

There are two ways to achieve this in OpenGL. One way consists of creating several VBOs so that each buffer holds one vertex attribute: position data is stored in its own buffer, texture coordinates in their own buffer, per-vertex colors in a third buffer, and so on.

There’s nothing wrong with doing things this way and it definitely works. However, there is one consideration to take into account: when we scatter our vertex data across several buffers, the GPU has to gather all of it at render time, every frame, as part of vertex processing.

I am personally not a big fan of this approach. I prefer interleaving the data myself once and then sending it to the video card in a way that’s already prepared for rendering. OpenGL is pretty flexible in this regard and it lets you interleave all the data you need in any format you may choose.

I’ve chosen to store position data first, then texture coordinate data and, finally, color data (XYZUVRGBA). The retained mesh class will be responsible for tightly packing vertex data into this format.

Once data is copied over to video memory, setting up the vertex attrib pointers can be a little tricky. This is where interleaved arrays become more challenging than separate attribute buffers. An error here will cause the video card to read garbage memory and possibly segfault in the GPU. This is not a good idea. I’ve seen errors like this bring down entire operating systems, and it’s not a pretty picture.

A sheet of paper helps when calculating the byte offsets: write down the logic and manually verify it before running it. The video driver must never read outside the interleaved array.
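For illustration, here is a hedged sketch of the attribute setup for this XYZUVRGBA layout (3 floats of position, 2 of UV, 4 of color); the attribute locations are assumed to match the shader's inputs, and the actual engine code may differ:

```cpp
// GL loader assumed (GLEW here, for illustration).
#include <GL/glew.h>

constexpr GLsizei kStride = (3 + 2 + 4) * sizeof(float);   // 36 bytes per vertex

void setupInterleavedAttributes(GLuint vao, GLuint vbo)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Position: 3 floats, starting at byte 0.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, kStride, (const void*)0);

    // Texture coordinates: 2 floats, starting right after the position (byte 12).
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, kStride, (const void*)(3 * sizeof(float)));

    // Color: 4 floats, starting after position + UV (byte 20).
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, kStride, (const void*)(5 * sizeof(float)));

    glBindVertexArray(0);
}
```

The key detail is that all three attributes share the same 36-byte stride and only differ in their starting offsets.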

Once ready, OpenGL will take care of feeding the vertex data into our shader inputs, where we will interpolate the texture coordinates and successfully sample the bound texture, as shown in the image above.

Now that we have a working foundation where we can develop custom shaders for any Entity in the scene, it’s time to start cleaning up resource management. Stay tuned for more!

Vortex V3 Renderer

The past couple of weeks have been super busy at work, but I’ve still managed to get the ball rolling with the new renderer for the Vortex Engine.

This is the first time in years I’ve decided to actually write a completely new renderer for Vortex. This new renderer is a complete clean-room implementation of the rendering logic, designed specifically for the new Entity-Component system in the engine.

Current Rendering Systems in Vortex

Let’s start by taking a look at the current rendering systems in the Vortex Engine. Ever since 2011, Vortex has had two rendering systems: a Fixed Pipeline rendering system and a Programmable Pipeline rendering system.

Dual Pipeline support: a Comparison of the Rendering Pipelines available in Vortex Engine. The image on the left represents the Fixed Pipeline. The image on the right represents the Programmable Pipeline.

Both these rendering systems are pretty robust. Both have been used to develop and launch successful apps in the iOS App Store and they have proven to be reliable and portable, allowing the programmer to target Linux, Windows, Mac OS X and Android, as well as iOS.

The problem with these renderers is that they were designed with Vortex’s Scenegraph-based API in mind. This means that these renderers do not know anything about Entities or Components, but rather, they work at the Node level.

Moving forward, the direction for the Vortex Engine is to provide an Entity Component interface and move away from the Scenegraph-based API. This means that glue code has to be developed to allow the traditional renderers to draw the Entity-Component hierarchy.

So… why is this a problem?

Why a new Renderer?

As Vortex V3 now provides a brand new Entity-Component hierarchy for expressing scenes, glue code had to be developed in order to leverage the legacy renderers in the Vortex Editor. In the beginning this was not a major problem; however, as the Entity-Component system matures, it has become ever more difficult to maintain compatibility with the legacy renderers.

PBR Materials in Unreal Engine 4. Image from ArtStation’s Pinterest.

Another factor is the incredible pace at which the rendering practice has continued to develop in these past few years. Nowadays, almost all modern mobile devices have support for OpenGL ES 2.0 and even 3.0, and PBR rendering has gone from a distant possibility to a very real technique for mobile devices. Supporting PBR rendering on these legacy renderers would require a significant rewrite of their core logic.

Finally, from a codebase standpoint, both renderers were implemented more than 5 years ago, back when C++11 was just starting to get adopted and compiler support was very limited. This does not mean that the legacy renderers’ codebases are obsolete by any means, but by leveraging modern C++ techniques, they could be cleaned up significantly.

From all of this, it is clear that a new clean-room implementation of the renderer is needed.

Designing a New Renderer

The idea is for the new renderer to be able to work with the Entity-Component hierarchy natively without a translation layer. It should be able to traverse the hierarchy and determine exactly what needs to be rendered for the current frame.

Once the objects to be rendered have been determined, a new and much richer material interface will determine exactly how to draw each object according to its properties.

Just like with the Vortex 2.0 renderer, this new renderer should fully support programmable shaders, but through a simplified API that requires less coding and allows drawing much more interesting objects and visual effects.

Choosing a backing API

Choosing a rendering API used to be a simple decision: pick DirectX for Windows-only code or OpenGL (ES) for portable code. The landscape has changed significantly in the past few years, however, and there are now a plethora of APIs we can choose from to implement hardware-accelerated graphics.

This year alone, the Khronos Group released the Vulkan specification, a new API that tackles some of the problems inherent to OpenGL, as seen in the following image.

Comparison of the OpenGL and Vulkan APIs of the Khronos Group. Slide Copyright (C) 2016 Khronos Group.

Now, both Vulkan and Metal are very appealing low-level APIs that provide a fine degree of control over the hardware, but they are limited in the sense that while Metal is Apple-specific, Vulkan is cross-platform but not available on Apple devices.

DirectX 12 is Windows 10 only, which rules it out right off the bat (for this project at least). DirectX 11 is a good option but, again, Windows only.

This leaves OpenGL and OpenGL ES as the two remaining options. I’ve decided to settle on Core OpenGL 3.3 at this time. I think it’s an API that exposes enough modern concepts to allow implementing a sophisticated renderer while also remaining fully compatible with Windows, iOS and everything in-between.

I don’t rule out implementing a dedicated Metal or Vulkan backend for Vortex in the future, and nothing in the Engine design should prevent this from happening; however, at this time, we have to start on a platform that’s available everywhere.

Using Core OpenGL 3.3 will also allow reusing the battle-tested shader API in Vortex. This component has several years of service under its belt and I’d venture to say that all of its bugs have been found and fixed.

Other than this particular component, I’m also reusing the material interface (but completely overhauling it) and developing a new RetainedMesh class for better handling of mesh data streaming to the GPU.

Closing Thoughts

Writing a comprehensive renderer is no weekend task. A lot of components must be carefully designed and built to fit together. The room for error is minimal, and any problem in any component that touches anything related to the Video Card can potentially make the entire system fail.

It is, at the same time, one of the most satisfying tasks that I can think of as a software engineer. Once you see it come to life, it’s more than a sum of its parts: it’s a platform for rendering incredible dream worlds on a myriad of platforms.

I will take my time developing this new renderer, enjoying the process along the way.

Stay tuned for more! : )

Excellent Intro to Qt Quick

I came across this video that provides a great introduction to the Qt Quick controls in Qt 5.

It’s very interesting to see how a fully-fledged, cross-platform app that consumes a Web API can be developed in just over 15 minutes, almost without writing a single line of code.

After seeing this video, I’ve been looking a little more into how C++ can be integrated into Qt Quick apps and, unfortunately, it doesn’t seem to leverage the signal-slot mechanism common to QWidgets applications.

This is a problem, since it means that reusing a large codebase might be a little more involved than a seamless transition from writing QWidgets to slowly rolling out side-by-side Qt Quick panels.

Nonetheless, it’s very impressive and it’s definitely worth taking a look at if you need to develop a quick desktop UI in 10~15 minutes.

Implementing a Waypoint System in the Vortex Engine

This week, we’re back to developing new native components for the Vortex Engine. The objective was to develop a “Waypoint Tween” component that moves an entity’s position between a series of points.

The new Waypoint Tween Component is used to move a 3D model between four points.

There are two main aspects to bringing the system to life: the component implementation in the Vortex Engine and the UI implementation in the Vortex Editor.

At the engine level, the system is implemented via a C++ component that can quickly perform the math necessary to interpolate positions between points based on time and speed.
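As a rough illustration of the kind of math involved (this is not the actual component code; Vector3, the speed units and the looping behavior are assumptions):

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Vector3 { float x, y, z; };

// Linear interpolation between two points.
static Vector3 lerp(const Vector3& a, const Vector3& b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

class WaypointTween {
public:
    WaypointTween(std::vector<Vector3> points, float speed)
        : m_points(std::move(points)), m_speed(speed) {}

    // Advance along the current segment based on elapsed time and speed,
    // returning the entity's new position.
    Vector3 update(float dt)
    {
        const Vector3& from = m_points[m_current];
        const Vector3& to   = m_points[(m_current + 1) % m_points.size()];

        float dx = to.x - from.x, dy = to.y - from.y, dz = to.z - from.z;
        float length = std::sqrt(dx * dx + dy * dy + dz * dz);

        m_t += (length > 0.0f) ? (m_speed * dt) / length : 1.0f;  // fraction of the segment covered
        if (m_t >= 1.0f) {                      // reached the waypoint: start the next segment
            m_t = 0.0f;
            m_current = (m_current + 1) % m_points.size();
            return to;
        }
        return lerp(from, to, m_t);
    }

private:
    std::vector<Vector3> m_points;
    float m_speed = 1.0f;       // world units per second (assumed)
    std::size_t m_current = 0;  // index of the waypoint we are moving away from
    float m_t = 0.0f;           // progress along the current segment, 0..1
};
```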

At the editor level, due to the flexibility of this system, exposing its properties actually required a significant amount of UI work. In this first iteration, points can be specified directly in the component properties of the inspector panel. Later on, the plan is to allow the user to specify the points as actual entities in the world and then reference them.

Animation and Movement

Now, in the animated GIF above, it can be seen that the 3D model is not only moving between the specified points, but also appears to be running between them.

There are two factors at play here to implement this effect: the MD2 Animation and the Waypoint Tween.

The MD2 Animation and Waypoint Tween Components.

When enabled, the Animate Orientation property of the Waypoint Tween component orients the 3D model so that it’s looking towards the direction of the point it’s moving to.

This property is optional, as there are cases where it could be undesirable. For instance, imagine implementing a conveyor belt that moves boxes on top of it: it would look weird if the boxes magically rotated on their Oy axis. For a character, on the other hand, it makes complete sense that the model be oriented towards the point it’s moving to.

Regarding the run animation, if you have been following our series on the Vortex Editor, you will remember that when instantiated by the engine, MD2 Models automatically include an MD2 Animation Component that handles everything related to animating the entity.

More details can be found in the post where we detail how MD2 support is implemented, but the idea is that we set the animation to looping “run”.

When we put it all together, we get an MD2 model that is playing a run animation as it patrols between different waypoints in the 3D world.

Waypoint System in Practice

So how can the waypoint system be used in practice? I envision two uses for the waypoint system.

The first one is for environment building. Under this scenario, the component system is used to animate objects in the background of the scene. Case in point, the conveyor belt system described above.

The second use, which might be a little more involved, would be to offload work from scripts. The efficient C++ implementation of the waypoint system would allow a component developed in a scripting language to have an entity move between different points without having to do the math calculations itself.

The dynamic nature of the component would allow this script to add and remove points, as well as interrupt the system at any time to perform other tasks. An example would be a monster that uses the waypoint system to patrol an area of the scene and then, when a player is detected close to the monster, the system is interrupted and a different system takes over, perhaps to attack the player.

In closing

I had a lot of fun implementing this system, as it brings a lot of options to the table in terms of visually building animated worlds for the Vortex Engine.

The plan for next week is to continue working on the Editor. There is some technical debt on the UI I want to address in order to improve the experience and there are also a couple of extra components I want to implement before moving on to other tasks.

As usual, stay tuned for more!

Visual Editing of Transformations through the Vortex Editor UI

This week I started implementing the properties panel (sometimes called the “inspector” panel) for the Vortex Editor.

The redesigned Transformation Panel in the Vortex Editor vertical slice.

I originally wanted to go with a table design for the UI. I thought it would give me the flexibility of adding as many editable entries as necessary. I even implemented a mockup in the UI that can be seen in previous screenshots of the Editor.

The problem I found, however, was that the difference in font and text spacing between the left and right panels of the UI made the layout look uneven. I knew I wanted something more symmetrical for what’s essentially the “home” view of the Editor, so I came up with a new design that can be seen in the image above.

Here, the Transform Panel consists of a custom UI component that is created dynamically and parented to the docked panel on the right. This provides a more uniform layout with two advantages: first, any number of property panels can be created and they will be nicely stacked one after the other in the panel (this will be useful in the future). Second, because the properties panel is detachable, it still allows the user to customize the Editor layout to her liking.

Now, one minor roadblock I’ve encountered from the engine standpoint for fully realizing this idea is the way Vortex currently represents Entity transformations.

All transformations in Vortex are represented as a 4×4 Matrix. The driving force behind this decision was to avoid having to convert between rotation representations at render time, thus saving some time during scenegraph traversal passes.

So what does a generic transformation matrix look like in Vortex?

Matrices are 4×4 arrays of values in the homogeneous coordinate system:

\begin{pmatrix}
sr_{00} & r_{01} & r_{02} & t_{x} \\
r_{10} & sr_{11} & r_{12} & t_{y} \\
r_{20} & r_{21} & sr_{22} & t_{z} \\
0 & 0 & 0 & 1
\end{pmatrix}

In this matrix, (tx, ty, tz) correspond to the translation component of the Entity and we can easily extract this information to populate the Transformation Panel. But what happens with the rotation and scale?

Rotation and Scale are mixed in together in the matrix (represented by the overlapping sr components), so we can’t really extract the original scale and rotation that generated this matrix. This means that we will only be able to show and edit the position of the Entity and not its rotation or scale.

This is a limitation that has to be lifted.

The plan is to provide a higher-level construct for describing transformations: a Transform class that keeps separate track of position, rotation and scale, but also provides a convenience method for computing, on demand, the transformation matrix it represents.

A tentative interface for the Transform class could be:
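The original snippet is not reproduced here, but a possible sketch, assuming hypothetical Vector3, Quaternion and Matrix4 math types standing in for Vortex's own, could look like this:

```cpp
// Hypothetical math types standing in for Vortex's own (not defined here).
class Vector3;
class Quaternion;
class Matrix4;

// A tentative Transform interface: position, rotation and scale are stored
// separately, and the 4x4 matrix is only composed on demand.
class Transform
{
public:
    void setPosition(const Vector3& position);
    void setRotation(const Quaternion& rotation);
    void setScale(const Vector3& scale);

    const Vector3& position() const;
    const Quaternion& rotation() const;
    const Vector3& scale() const;

    // Convenience method: compose translation * rotation * scale into a matrix,
    // caching the result until one of the setters marks it dirty.
    Matrix4 toMatrix() const;
};
```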

I think this will be a good change for the Engine. Working with Entities using a position-rotation-scale mindset, instead of having to deal with the cognitive overhead of thinking in terms of matrix operations, will help users be more productive (and save precious keystrokes).

This coming week I will be working on finalizing this implementation and finally exposing full Entity transformation control through the UI. Stay tuned!

The GLSL Shader Editor

This week we take a break from work in the Vortex Editor to revisit an older personal project of mine: the GLSL Shader Editor, a custom editor for OpenGL shaders.

The UI of my custom shader editor.

The idea of the editor was to allow very fast shader iteration times by providing an area where a shader program could be written and then, by simply pressing Cmd+B (Ctrl+B on Windows), the shader source would be compiled and hot-loaded into the running application.

This concept of hot-loading allowed seeing the results of the new shading instantly, without having to stop the app and without even having to save the shader source files. This allowed for very fast turn-around times for experimenting with shader ideas.

As the image above shows, the UI was divided in two main areas: an Edit View and a Render View.

The Edit View consisted of a tabbed component with two text areas. These text areas (named “Vertex” and “Fragment”) were where you could write your custom vertex and fragment shaders, respectively. Their contents formed the shader source that would be compiled and linked into a shader program.

The shader program would be compiled by pressing Cmd+B and, if no errors were found, then it would be hot-loaded and used to shade the model displayed in the Render View.

The status bar (showing “OK” in the image) would display any shader compilation errors as reported by the video driver.
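As a rough sketch of what the compile-and-report step behind Cmd+B might look like (compileStage is an illustrative name, and the actual application code may differ):

```cpp
// GL loader assumed (GLEW here, for illustration).
#include <GL/glew.h>
#include <string>

// Compiles one shader stage; on failure, returns the driver's info log so it
// can be shown in the status bar.
bool compileStage(GLenum stage, const std::string& source, GLuint& outShader, std::string& outLog)
{
    outShader = glCreateShader(stage);
    const char* src = source.c_str();
    glShaderSource(outShader, 1, &src, nullptr);
    glCompileShader(outShader);

    GLint ok = GL_FALSE;
    glGetShaderiv(outShader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        GLint length = 0;
        glGetShaderiv(outShader, GL_INFO_LOG_LENGTH, &length);
        outLog.resize(length > 0 ? length : 1);
        glGetShaderInfoLog(outShader, length, nullptr, &outLog[0]);
        glDeleteShader(outShader);
        return false;          // caller keeps the previous program; status bar shows outLog
    }
    return true;               // caller links the stages and hot-loads the new program
}
```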

The application had a number of built-in primitives and it also allowed importing models in the OBJ format. It was developed on Ubuntu Linux and supported MS Windows and OS X on a number of different video cards.

Application Features

  • Built entirely in C++.
  • Supports Desktop OpenGL 2.0.
  • Qt GUI.
  • Supported platforms: (Ubuntu) Linux, MS Windows, OSX.
  • Diverse set of visual primitives and OBJ import support.
  • Very efficient turn-around times by hot-loading the shader dynamically – no need to save files!
  • GLSL syntax highlighting.
  • Docked, customizable UI.

Interestingly, this project was developed at around the same time that I got started with the Vortex Engine; therefore, it does not use any of Vortex’s code. This means that all shader compiling and loading, as well as all rendering, was developed from scratch for this project.

I’ve added a project page for this application (under Personal Projects in the blog menu). I’ve also redesigned the menu to list all different personal projects that I’ve either worked on or that I’m currently working on, so please feel free to check it out.

Next week, we’ll be going back to the Vortex Editor! Stay tuned for more!

Designing the Editor Architecture

Last week I used the (very little) free time that I had to work on the internal architecture of the Editor and how it’s going to interact with the Vortex Engine.

In general terms, the plan is to have all UI interactions be well-defined and go to a Front Controller object that’s going to be responsible for driving the engine. This Front Controller, by definition, will be a one-stop shop for the entire implementation behind the UI and it will also, at a later stage, provide higher-granularity control of the engine.

Vortex Framebuffer Object Support: a knight is rendered on a texture that is then mapped on a cube. All rendering is done on the GPU, avoiding expensive copies to RAM.

Other components I’ve been designing include an undo/redo stack (which is super important for an editor application) and a scripting API. It’s still early for both these components, but I think it’s better if the design supports these from early on as opposed to trying to tack them on to the Editor at a later stage.

Finally, last week I took the time to bootstrap a higher OpenGL version on Windows. The Editor now has access to full OpenGL on this platform. This is a significant milestone that opens the door for bringing Vortex’s advanced rendering techniques to Windows, such as Framebuffer Objects, as depicted in the image above.

I’ve only got a short update for this week. Stay tuned for more to come : )