Wrapping up 2016: Lessons Learned from working on Vortex

2016 marks a year where a lot of work went into both the Vortex Engine and the Vortex Editor. Working on both of these projects in my free time has been a great way to continue honing my C++ and OpenGL skills. In this post, I am going to do a quick retrospective on the work done and present a few lessons learned.

Lessons on building a UI-heavy application

Rendering a deserialized scene using the new renderer.

The way I decided to approach this work was to divide Vortex into two separate projects: the Vortex Engine project and a brand new Editor built on top of it. The Engine itself has been in continuous development since 2010.

Today, both projects have reached a level of maturity where we could clearly have two engineers working full-time on them.

The Editor is definitely one of the driving forces that pushes the Engine into supporting more and more visual features, giving the user more power of expression. The amount of work required to expose these features to the outside world, however, is something that I did not expect.

Let’s walk through a simple example: selecting an Entity and editing it so we can change its properties. In order to do this we must:

  1. Provide the user a means to select an Entity.
  2. React to this, introspecting the selected Entity and its components.
  3. Build and display a custom UI for each component in the Entity.
  4. For each component, render its UI widgets and preload them with the component’s current values.
  5. Provide the user the means to change these properties.
  6. Have the changes be reflected in the 3D scene immediately.

This system can quickly grow into thousands of lines of code. Even if the code does not have the strenuous performance requirements of the rendering loop, we still need to develop responsive code with a good architecture that allows building more features on top of it.
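
To make steps 2 through 4 concrete, here is a rough sketch of the shape this code takes. Every name below (Entity, Component, PropertyEditor and friends) is a hypothetical stand-in, not the actual Vortex Editor API:

// Hypothetical sketch: introspect the selection and build its inspector UI.
void InspectorPanel::onEntitySelected( Entity* entity )
{
    clearEditors();

    // Walk the selected Entity's components.
    for ( Component* component : entity->components() )
    {
        // A factory maps each component type to a custom editor widget.
        PropertyEditor* editor = mEditorFactory.createEditorFor( component );

        // Preload the widgets with the component's current values.
        editor->loadValuesFrom( component );

        addEditor( editor );
    }
}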

The rewards from this effort are huge, however. The Editor UI is the main point of interaction of the user with Vortex and it’s the way that she can tell the Engine what she wants it to do. Good UI and, more importantly, good UX, are key in making the Editor enjoyable to the user.

Lessons on going with C++11

C++ Logo

I finally decided to make the jump and update the codebase from C++98 to C++11, raising the min-spec for running the renderer from OpenGL ES 2.0 to Core OpenGL 3.3.

Going to C++11 was the right choice. I find that C++11 allows for more power of expression when developing a large C++ codebase, and it provides several utilities for making the code cleaner.

There are a few takeaways from using C++11, however, that I think may not be as clear for people just getting started with this version of the language.

Lessons on C++11 enum classes

I like enum classes a lot and I tend to use them as much as possible. There were several places throughout the legacy Vortex Engine code where old C-style structs and/or static const values used to declare configuration parameters did not look too clean. C++ enum classes helped wrap these while also keeping their enclosing namespace clean.

The only limitation I found was using enum classes for bitmasks. enum class members can definitely be cast to make them behave as expected. Doing this, however, means strong-arming the type system, and some may argue it does away with the advantages of having it in the first place.

Additionally, if you try to implicitly convert a binary operator expression involving an enum class value into a bool, you are going to hit a roadblock:
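
A minimal sketch of the problem; the flag names here are illustrative:

#include <cstdint>

enum class ParameterFlags : std::uint32_t
{
    kParameterFlag = 1 << 0,
    kOtherFlag     = 1 << 1
};

void configure( std::uint32_t mask )
{
    // if ( mask & ParameterFlags::kParameterFlag ) // does not compile:
    // there is no operator& mixing integers and enum class values, and
    // even a custom operator& returning the enum cannot convert to bool.

    // The workaround is casting back to the underlying type:
    if ( mask & static_cast<std::uint32_t>( ParameterFlags::kParameterFlag ) )
    {
        // The flag is set.
    }
}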

I like writing if (mask & kParameterFlag), as I find it clearer to read than having a mandatory comparison against zero at the end, and C++11 enum classes do not give me that option.

Lessons on C++11 Weak Pointers

C++11’s smart pointers (std::shared_ptr and std::weak_ptr) are great for an application like the Editor, where Entity references (pointers) need to be passed around.

Imagine this situation: a selected Entity with a few components. We have several UI components holding pointers to all these objects. Now, if the user decides to delete this entity or remove one of its components, how do we prevent the UI code from dereferencing dangling pointers? We would have to hunt down all references and null them.

Using C++11’s std::weak_ptr, we know that a pointer to a destroyed Entity or Component will fail to lock. This means that the UI controllers can detect situations where they are pointing to deleted data and handle them gracefully.
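
Here is a sketch of that pattern with a stand-in Entity type; the actual Editor controllers are of course more involved:

#include <memory>

struct Entity { /* stand-in for the engine's Entity type */ };

class InspectorController
{
public:
    explicit InspectorController( const std::shared_ptr<Entity>& entity )
        : mEntity( entity ) {}

    void refresh()
    {
        // lock() only succeeds while the Entity is still alive somewhere.
        if ( std::shared_ptr<Entity> entity = mEntity.lock() )
        {
            // Safe to read the Entity and update the widgets.
        }
        else
        {
            // The Entity was deleted elsewhere; clear the panel gracefully.
        }
    }

private:
    std::weak_ptr<Entity> mEntity; // non-owning reference to the selection
};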

Lessons on C++11 Shared Pointers

Like any other C++ object, smart pointers passed by value will be copied. Unlike copying a raw pointer, however, copying a smart pointer is an expensive operation.

Smart pointers need to keep a reference count to know when it’s okay to delete the managed object and, additionally, C++ mandates that this bookkeeping be performed in a thread-safe fashion. In practice, this means an atomic increment or decrement (with its associated synchronization cost) for every smart pointer we create and destroy.

If you are not careful, your CPU cycles will be spent copying pointers around, and you may have a hard time when you try to scale your Engine down to run on a mobile device.

I investigated this issue and found this amazing write-up by Herb Sutter: https://herbsutter.com/2013/06/05/gotw-91-solution-smart-pointer-parameters/. The idea is to avoid passing shared pointers by copy to any function that does not intend to keep a long-term reference to the object.

Don’t be afraid of calling std::shared_ptr::get() to pass a raw pointer to a short function that performs temporary work. Only opt into smart pointer parameters when you want to signal to the outside world that the callee will share ownership of the passed-in object.
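
In code, the guideline might look like this; both functions and the Entity type are hypothetical stand-ins:

#include <memory>

struct Entity { /* stand-in for the engine's Entity type */ };

// Short, temporary work: take a raw pointer. No ownership is implied
// and no reference count is touched.
void drawBoundingBox( const Entity* entity ) { /* ... */ }

// Long-term reference: take the shared_ptr by value to signal that the
// callee shares ownership. The refcount cost happens here, on purpose.
void trackSelection( std::shared_ptr<Entity> entity ) { /* ... store it ... */ }

void onSelect( const std::shared_ptr<Entity>& selected )
{
    drawBoundingBox( selected.get() ); // no refcount traffic
    trackSelection( selected );        // deliberate ownership share
}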

Lessons on using Core OpenGL

OpenGL Logo. Copyright (C) Khronos Group.

Choosing a specific minimum version of Core OpenGL helps settle the questions that pop up every time you use anything beyond OpenGL 1.1 and wonder whether you should implement the ARB / EXT variants as well.

Core OpenGL 3.3 makes it easier to discover the optimal GPU usage path, as you are now required to use VBOs, VAOs, Shaders and other modern video card constructs. It also has the added advantage that legacy OpenGL calls (which should not be used anyway) will simply not work.

Differences between OpenGL implementations, however, are still pervasive enough that code tested and verified to work on Windows may not work at all on OSX due to differences in the video driver. Moving the codebase to a mobile device will again prove a challenge.

The lesson here is to never assume that your OpenGL code works. Always test on all the platforms you claim to support.

In Closing

These were some of the most prominent lessons learned from working on Vortex this year. I am looking forward to continuing to work on these projects through 2017!

I think we’ve only scratched the surface of what the new renderer’s architecture can help build. I definitely want to continue developing the renderer to support more immersive experiences, and to keep evolving the Editor so it exposes even more features to the user!

Thank you for joining in through the year and, as usual, stay tuned for more!

Shader Texture Mapping and Interleaved Arrays

Work continues on the new Rendering System for the Vortex Engine.

This week was all about implementing basic texture mapping using GL Core. The following image shows our familiar box, only this time, it’s being textured from a shader.

Texture Mapping via Interleaved Arrays and Shaders in the Vortex Engine.

A number of changes had to go into the Renderer in order to perform texture mapping. These touched almost all the layers of the Engine and Editor.

  1. First, I wrote a “Single Texture” shader in GLSL to perform the perspective transform of the Entity’s mesh, interpolate its UV texture coordinates and sample a texture (a sketch of what such a shader pair might look like follows right after this list).
  2. Second, I had to change the way the retained mesh works in order to be able to send texture coordinate data to the video card (more on this later).
  3. Finally, I had to modify the Editor UI to allow selecting which shader is to be used when rendering an Entity.
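
As promised above, here is a minimal sketch of what such a shader pair might look like in GLSL 3.30. The attribute and uniform names are illustrative, not necessarily Vortex’s conventions. The vertex shader transforms each vertex and passes the UVs through:

#version 330 core

uniform mat4 uModelViewProjection;

in vec3 aPosition;
in vec2 aTexCoord;

out vec2 vTexCoord;

void main()
{
    vTexCoord = aTexCoord;
    gl_Position = uModelViewProjection * vec4( aPosition, 1.0 );
}

The fragment shader then samples the bound texture at the interpolated coordinates:

#version 330 core

uniform sampler2D uTexture;

in vec2 vTexCoord;
out vec4 oFragColor;

void main()
{
    oFragColor = texture( uTexture, vTexCoord );
}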

So, regarding how to submit mesh texture coordinate data to the video card: since OpenGL Core mandates the use of Vertex Buffer Objects (VBOs), it was clear that UVs had to be sent to (and retained in) GPU memory too.

There are two ways to achieve this in OpenGL. One consists in creating several VBOs so that each buffer holds a single vertex attribute: position data is stored in its own buffer, texture coordinates are stored in their own buffer, per-vertex colors take a third buffer, and so on and so forth.

There’s nothing wrong with doing things this way and it definitely works. There is one consideration to take into account, however: when we scatter our vertex data across several buffers, the GPU has to gather all of this data at render time, every frame, as part of vertex processing.

I am personally not a big fan of this approach. I prefer interleaving the data myself once and then sending it to the video card in a way that’s already prepared for rendering. OpenGL is pretty flexible in this regard and it lets you interleave all the data you need in any format you may choose.

I’ve chosen to store position data first, then texture coordinate data and, finally, color data (XYZUVRGBA). The retained mesh class will be responsible for tightly packing vertex data into this format.

Once the data is copied over to video memory, setting up the vertex attrib pointers can be a little tricky. This is where interleaved arrays become more challenging than separate attribute buffers. An error here will cause the video card to read garbage memory and possibly segfault in the GPU. This is not a good idea: I’ve seen errors like this bring down entire operating systems, and it’s not a pretty picture.

A sheet of paper helps when calculating the byte offsets. It’s important to write down the logic and then manually test it with pen and paper: the video driver must never read outside the interleaved array.
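
For the XYZUVRGBA layout described above, the setup might look roughly like this; the attribute locations 0, 1 and 2 are assumptions that must match the shader’s declarations:

void setupInterleavedAttributes()
{
    // 3 position + 2 texture + 4 color floats = 9 floats per vertex.
    const GLsizei stride = 9 * sizeof( GLfloat );

    // Position (XYZ): starts at byte offset 0.
    glEnableVertexAttribArray( 0 );
    glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, stride,
                           reinterpret_cast<const void*>( 0 ) );

    // Texture coordinates (UV): right after the 3 position floats.
    glEnableVertexAttribArray( 1 );
    glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, stride,
                           reinterpret_cast<const void*>( 3 * sizeof( GLfloat ) ) );

    // Color (RGBA): after the 3 + 2 preceding floats.
    glEnableVertexAttribArray( 2 );
    glVertexAttribPointer( 2, 4, GL_FLOAT, GL_FALSE, stride,
                           reinterpret_cast<const void*>( 5 * sizeof( GLfloat ) ) );
}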

Once ready, OpenGL will take care of feeding the vertex data into our shader inputs, where we will interpolate the texture coordinates and successfully sample the bound texture, as shown in the image above.

Now that we have a working foundation where we can develop custom shaders for any Entity in the scene, it’s time to start cleaning up resource management. Stay tuned for more!

Vortex V3 Renderer

The past couple of weeks have been super busy at work, but I’ve still managed to get the ball rolling with the new renderer for the Vortex Engine.

This is the first time in years I’ve decided to actually write a completely new renderer for Vortex. This new renderer is a complete clean-room implementation of the rendering logic, designed specifically for the new Entity-Component system in the engine.

Current Rendering Systems in Vortex

Let’s start by taking a look at the current rendering systems in the Vortex Engine. Ever since 2011, Vortex has had two rendering systems: a Fixed Pipeline rendering system and a Programmable Pipeline rendering system.

Dual Pipeline support: a Comparison of the Rendering Pipelines available in Vortex Engine. The image on the left represents the Fixed Pipeline. The image on the right represents the Programmable Pipeline.

Both these rendering systems are pretty robust. Both have been used to develop and launch successful apps in the iOS App Store and they have proven to be reliable and portable, allowing the programmer to target Linux, Windows, Mac OS X and Android, as well as iOS.

The problem with these renderers is that they were designed with Vortex’s Scenegraph-based API in mind. This means that these renderers do not know anything about Entities or Components, but rather, they work at the Node level.

Moving forward, the direction for the Vortex Engine is to provide an Entity Component interface and move away from the Scenegraph-based API. This means that glue code has to be developed to allow the traditional renderers to draw the Entity-Component hierarchy.

So… why is this a problem?

Why a new Renderer?

As Vortex V3 now provides a brand new Entity-Component hierarchy for expressing scenes, glue code had to be developed in order to leverage the legacy renderers in the Vortex Editor. In the beginning this was not a major problem; however, as the Entity-Component system matures, it has become ever more difficult to maintain compatibility with the legacy renderers.

PBR Materials in Unreal Engine 4. Image from ArtStation’s Pinterest.

Another factor is the incredible pace at which the rendering practice has continued to develop in these past few years. Nowadays, almost all modern mobile devices have support for OpenGL ES 2.0 and even 3.0, and PBR rendering has gone from a distant possibility to a very real technique for mobile devices. Supporting PBR rendering on these legacy renderers would require a significant rewrite of their core logic.

Finally, from a codebase standpoint, both renderers were implemented more than 5 years ago, back when C++11 was just starting to get adopted and compiler support was very limited. This does not mean that the legacy renderers’ codebases are obsolete by any means, but by leveraging modern C++ techniques, they could be cleaned up significantly.

From all of this, it is clear that a new clean-room implementation of the renderer is needed.

Designing a New Renderer

The idea is for the new renderer to be able to work with the Entity-Component hierarchy natively without a translation layer. It should be able to traverse the hierarchy and determine exactly what needs to be rendered for the current frame.

Once the objects to be rendered have been determined, a new and much richer material interface will determine exactly how to draw each object according to its properties.

Just like with the Vortex 2.0 renderer, this new renderer should fully support programmable shaders, but through a simplified API that requires less coding and allows drawing much more interesting objects and visual effects.
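
In very rough terms, the frame flow I have in mind looks like the sketch below. Every type here is a minimal stand-in; this is a design illustration, not the actual Vortex V3 API:

#include <vector>

// Minimal stand-in types; the engine's real components are much richer.
struct MeshComponent { /* vertex data handles */ };

struct MaterialComponent
{
    void bind() const { /* set shaders, textures and render state */ }
};

struct Entity
{
    const MeshComponent*     mesh     = nullptr;
    const MaterialComponent* material = nullptr;
};

struct RenderItem
{
    const MeshComponent*     mesh;
    const MaterialComponent* material;
};

void drawMesh( const MeshComponent& ) { /* issue the draw call */ }

void renderFrame( const std::vector<Entity>& entities )
{
    std::vector<RenderItem> items;

    // 1. Traverse the hierarchy and determine exactly what to render.
    for ( const Entity& entity : entities )
    {
        if ( entity.mesh && entity.material )
            items.push_back( { entity.mesh, entity.material } );
    }

    // 2. The material interface decides exactly how each object is drawn.
    for ( const RenderItem& item : items )
    {
        item.material->bind();
        drawMesh( *item.mesh );
    }
}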

Choosing a backing API

Choosing a rendering API used to be a simple decision: pick DirectX for Windows-only code or OpenGL (ES) for portable code. The landscape has changed significantly in the past few years, however, and there are now a plethora of APIs we can choose from to implement hardware-accelerated graphics.

This year alone, the Khronos Group released the Vulkan specification, a new API that tackles some of the problems inherent to OpenGL, as seen in the following image.

Comparison of the OpenGL and Vulkan APIs of the Khronos Group. Slide Copyright (C) 2016 Khronos Group.

Now, both Vulkan and Metal are very appealing low-level APIs that provide a fine degree of control over the hardware, but they are limited in the sense that while Metal is Apple specific, Vulkan is cross-platform but not available on Apple devices.

DirectX 12 is Windows 10 only, which rules it out right off the bat (for this project at least). DirectX 11 is a good option but, again, Windows only.

This leaves OpenGL and OpenGL ES as the two remaining options. I’ve decided to settle for Core OpenGL 3.3 at this time. I think it’s an API that exposes enough modern concepts to allow implementing a sophisticated renderer while also remaining fully compatible with Windows, iOS and everything in-between.

I don’t discard the possibility of implementing a dedicated Metal or Vulkan backend for Vortex in the future, and nothing in the Engine design should prevent this from happening. At this time, however, we have to start from a platform that’s available everywhere.

Using Core OpenGL 3.3 will also allow reusing the battle-tested shader API in Vortex. This component has several years of service under its belt and I’d venture to say that all of its bugs have been found and fixed.

Other than this particular component, I’m also reusing the material interface (but completely overhauling it) and developing a new RetainedMesh class for better handling mesh data streaming to the GPU.

Closing Thoughts

Writing a comprehensive renderer is no weekend task. A lot of components must be carefully designed and built to fit together. The room for error is minimal, and any problem in any component that touches anything related to the Video Card can potentially make the entire system fail.

It is, at the same time, one of the most satisfying tasks I can think of as a software engineer. Once you see it come to life, it’s more than the sum of its parts: it’s a platform for rendering incredible dream worlds on a myriad of platforms.

I will take my time developing this new renderer, enjoying the process along the way.

Stay tuned for more! : )

Excellent Intro to Qt Quick

I came across this video that provides a great introduction to the Qt Quick controls in Qt 5.

It’s very interesting to see how a fully fledged, cross-platform app that consumes a Web API can be developed in just over 15 minutes with barely a single line of code.

After seeing this video, I’ve been looking a little more into how C++ can be integrated into Qt Quick apps and, unfortunately, it doesn’t seem to leverage the signal-slot mechanism common to QWidgets applications.

This is a problem, since it means that reusing a large codebase might be a little more involved than a seamless transition from writing QWidgets to slowly rolling out side-by-side Qt Quick panels.

Nonetheless, it’s very impressive and it’s definitely worth taking a look at if you need to develop a quick desktop UI in 10~15 minutes.

Implementing a Waypoint System in the Vortex Engine

This week, we’re back to developing new native components for the Vortex Engine. The objective this time was to develop a “Waypoint Tween” component that moves an entity’s position between a series of points.

The new Waypoint Tween Component is used to move a 3D model between four points.

There are two main aspects to bringing the system to life: the component implementation in the Vortex Engine and the UI implementation in the Vortex Editor.

At the engine level, the system is implemented as a C++ component that can efficiently perform the math necessary to interpolate the entity’s position between points based on time and speed.
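
A sketch of the per-frame math such a component performs follows. All names are stand-ins, and for brevity this version expresses speed in segments per second rather than world units:

#include <cstddef>
#include <vector>

struct Vector3 { float x, y, z; };

// Linear interpolation between two points, t in [0, 1].
static Vector3 lerp( const Vector3& a, const Vector3& b, float t )
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

class WaypointTween
{
public:
    // Advance along the current segment and return the new position.
    Vector3 update( float deltaTime )
    {
        mProgress += mSpeed * deltaTime;
        if ( mProgress >= 1.0f )
        {
            mProgress -= 1.0f;
            mCurrent = ( mCurrent + 1 ) % mPoints.size(); // loop the path
        }

        const Vector3& from = mPoints[ mCurrent ];
        const Vector3& to   = mPoints[ ( mCurrent + 1 ) % mPoints.size() ];
        return lerp( from, to, mProgress );
    }

private:
    std::vector<Vector3> mPoints { { 0, 0, 0 }, { 5, 0, 0 }, { 5, 0, 5 }, { 0, 0, 5 } };
    std::size_t mCurrent  = 0;
    float       mSpeed    = 0.5f; // segments per second
    float       mProgress = 0.0f;
};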

At the editor level, due to the flexibility of this system, exposing its properties actually required a significant amount of UI work. In this first iteration, points can be specified directly in the component properties of the inspector panel. Later in the game, the plan is to allow the user to specify the points as actual entities in the world and then reference them.

Animation and Movement

Now, in the animated GIF above, it can be seen that the 3D model is not only moving between the specified points, but it also appears as if the model is running between them.

There are two factors at play here to implement this effect: the MD2 Animation and the Waypoint Tween.

The MD2 Animation and Waypoint Tween Components.

When enabled, the Animate Orientation property of the Waypoint Tween component orients the 3D model so that it’s looking towards the direction of the point it’s moving to.

This property is optional, as there are some cases where this behavior could be undesirable. For instance, imagine implementing a conveyor belt that moves boxes on top of it: it would look weird if the boxes magically rotated on their Oy axis. For a character, on the other hand, it makes complete sense that the model be oriented towards the point it’s moving to.

Regarding the run animation, if you have been following our series on the Vortex Editor, you will remember that when instantiated by the engine, MD2 Models automatically include an MD2 Animation Component that handles everything related to animating the entity.

More details can be found in the post where we detail how MD2 support is implemented, but the idea is that we set the animation to a looping “run” cycle.

When we put it all together, we get an MD2 model that is playing a run animation as it patrols between different waypoints in the 3D world.

Waypoint System in Practice

So how can the waypoint system be used in practice? I envision two uses for the waypoint system.

The first one is for environment building. Under this scenario, the component system is used to animate objects in the background of the scene. Case in point, the conveyor belt system described above.

The second use, which might be a little more involved, would be to offload work from scripts. The efficient C++ implementation of the waypoint system would allow a component developed in a scripting language to have an entity move between different points without having to do the math calculations itself.

The dynamic nature of the component would allow such a script to add and remove points, as well as interrupt the system at any time to perform other tasks. An example would be a monster that uses the waypoint system to patrol an area of the scene; when a player is detected close to the monster, the system is interrupted and a different system takes over, perhaps to attack the player.

In closing

I had a lot of fun implementing this system, as it brings a lot of options to the table in terms of visually building animated worlds for the Vortex Engine.

The plan for next week is to continue working on the Editor. There is some technical debt on the UI I want to address in order to improve the experience and there are also a couple of extra components I want to implement before moving on to other tasks.

As usual, stay tuned for more!

Visual Editing of Transformations through the Vortex Editor UI

This week I started implementing the properties panel (sometimes called the “inspector” panel) for the Vortex Editor.

The redesigned Transformation Panel in the Vortex Editor vertical slice.

I originally wanted to go with a table design for the UI. I thought it would give me the flexibility of adding as many editable entries as necessary. I even implemented a mockup that can be seen in previous screenshots of the Editor.

The problem I found, however, was that the difference in font and text spacing between the left and right panels of the UI made the layout look uneven. I knew I wanted something more symmetrical for what’s essentially the “home” view of the Editor, so I came up with a new design that can be seen in the image above.

Here, the Transform Panel consists of a custom UI component that is created dynamically and parented to the docked panel on the right. This provides a more uniform layout with two advantages: first, any number of property panels can be created and they will be neatly stacked one after the other in the panel (this will be useful in the future). Second, because the properties panel is detachable, it still allows the user to customize the Editor layout to her liking.

Now, one minor roadblock I’ve encountered from the engine standpoint for fully realizing this idea is the way Vortex currently represents Entity transformations.

All transformations in Vortex are represented as a 4×4 matrix. The driving force behind this decision was to avoid having to convert between rotation representations at render time, thus saving some time during scenegraph traversal passes.

So what does a generic transformation matrix look like in Vortex?

Matrices are 4×4 arrays of values in the homogeneous coordinate system:

\begin{pmatrix}
sr_{00} & r_{01} & r_{02} & t_{x} \\
r_{10} & sr_{11} & r_{12} & t_{y} \\
r_{20} & r_{21} & sr_{22} & t_{z} \\
0 & 0 & 0 & 1
\end{pmatrix}

In this matrix, (tx, ty, tz) correspond to the translation component of the Entity and we can easily extract this information to populate the Transformation Panel. But what happens with the rotation and scale?

Rotation and Scale are mixed together in the matrix (represented by the overlapping sr components), so we cannot recover the original scale and rotation values that generated it. This means that we will only be able to show and edit the position of the Entity, not its rotation or scale.
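
For illustration, extracting the translation amounts to reading the fourth column; here assuming a hypothetical at( row, col ) accessor on the matrix class:

// The translation reads directly off the fourth column of the matrix.
void extractTranslation( const vtx::Matrix4& mat, float& tx, float& ty, float& tz )
{
    tx = mat.at( 0, 3 );
    ty = mat.at( 1, 3 );
    tz = mat.at( 2, 3 );

    // No equivalent exists for rotation and scale: the upper-left 3×3
    // block stores their product, so the individual factors are lost.
}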

This is a limitation that has to be lifted.

The plan is to provide a higher-level abstraction for describing transformations: a Transform class that keeps separate track of position, rotation and scale, while also providing a convenience method for computing, on demand, the transform matrix that it represents.

A tentative interface for the Transform class could be:

namespace vtx {

// Keeps position, rotation and scale separate; the composed matrix
// is only computed when requested.
class Transform
{
  public:
    void setPosition( float x, float y, float z );
    void setRotationEuler( float rx, float ry, float rz );
    void setScale( float sx, float sy, float sz );

    vtx::Matrix4 matrix() const; // used for the rendering pass

    // Other getter methods omitted
};

} // namespace vtx

I think this will be a good change for the Engine. Working with Entities using a position-rotation-scale mindset, instead of dealing with the cognitive overhead of expressing these as matrix operations, will help users be more productive (and save precious keystrokes).
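
Usage would then read naturally in terms of position, rotation and scale; something along these lines (the Euler angle units are still to be defined):

vtx::Transform transform;
transform.setPosition( 0.0f, 1.0f, 0.0f );
transform.setRotationEuler( 0.0f, 45.0f, 0.0f );
transform.setScale( 2.0f, 2.0f, 2.0f );

// The matrix is only composed when the renderer needs it:
vtx::Matrix4 modelMatrix = transform.matrix();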

This coming week I will be working on finalizing this implementation and finally exposing full Entity transformation control through the UI. Stay tuned!

The GLSL Shader Editor

This week we take a break from work in the Vortex Editor to revisit an older personal project of mine: the GLSL Shader Editor, a custom editor for OpenGL shaders.

The UI of my custom shader editor.

The idea of the editor was to allow very fast shader iteration times by providing an area where a shader program could be written and then, by simply pressing Cmd+B (Ctrl+B on Windows), the shader source would be compiled and hot-loaded into the running application.

This concept of hot-loading allowed seeing the results of the new shading instantly, without having to stop the app and without even having to save the shader source files. The result was very fast turn-around times for experimenting with shader ideas.

As the image above shows, the UI was divided in two main areas: an Edit View and a Render View.

The Edit View consisted of a tabbed component with two text areas, named “Vertex” and “Fragment”, where you could write your custom vertex and fragment shaders respectively. The contents of these two areas formed the shader source that would be compiled and linked into a shader program.

The shader program would be compiled by pressing Cmd+B and, if no errors were found, then it would be hot-loaded and used to shade the model displayed in the Render View.

The status bar (showing “OK” in the image) would display any shader compilation errors as reported by the video driver.
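
While the editor’s own code is more involved, the compile-and-report step builds on the standard GL calls; a sketch:

GLuint compileShader( GLenum type, const char* source )
{
    GLuint shader = glCreateShader( type );
    glShaderSource( shader, 1, &source, nullptr );
    glCompileShader( shader );

    GLint status = GL_FALSE;
    glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
    if ( status != GL_TRUE )
    {
        GLchar log[ 1024 ];
        glGetShaderInfoLog( shader, sizeof( log ), nullptr, log );
        // Surface 'log' in the status bar instead of swapping shaders.
        glDeleteShader( shader );
        return 0;
    }

    return shader; // ready to link and hot-load
}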

The application had a number of built-in primitives and it also allowed importing models in the OBJ format. It was developed on Ubuntu Linux and supported MS Windows and OS X on a number of different video cards.

Application Features

  • Built entirely in C++.
  • Supports Desktop OpenGL 2.0.
  • Qt GUI.
  • Supported platforms: (Ubuntu) Linux, MS Windows, OSX.
  • Diverse set of visual primitives and OBJ import support.
  • Very efficient turn-around times by hot-loading the shader dynamically – no need to save files!
  • GLSL syntax highlighting.
  • Docked, customizable UI.

Interestingly, this project was developed at around the same time that I got started with the Vortex Engine, therefore, it does not use any of Vortex’s code. This means that all shader compiling and loading, as well as all rendering was developed from scratch for this project.

I’ve added a project page for this application (under Personal Projects in the blog menu). I’ve also redesigned the menu to list all different personal projects that I’ve either worked on or that I’m currently working on, so please feel free to check it out.

Next week, we’ll be going back to the Vortex Editor! Stay tuned for more!

Designing the Editor Architecture

Last week I used the (very little) free time that I had to work on the internal architecture of the Editor and how it’s going to interact with the Vortex Engine.

In general terms, the plan is to have all UI interactions be well-defined and routed through a Front Controller object that’s going to be responsible for driving the engine. This Front Controller will, by definition, be a one-stop shop for the entire implementation behind the UI, and at a later stage it will also provide higher-granularity control of the engine.
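
As a rough sketch, the Front Controller’s interface might end up looking something like this; every name below is hypothetical at this stage:

#include <string>

// Single, well-defined entry point for every UI interaction.
class FrontController
{
public:
    // UI events funnel through these methods; the controller drives the engine.
    void onEntitySelected( int entityId );
    void onEntityDeleted( int entityId );
    void onSceneSaveRequested( const std::string& path );

    // Hooks for the undo/redo stack and, later on, the scripting API.
    void undo();
    void redo();
};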

Vortex Framebuffer Object Support: a knight is rendered on a texture that is then mapped on a cube. All rendering is done on the GPU, avoiding expensive copies to RAM.

Other components I’ve been designing include an undo/redo stack (which is super important for an editor application) and a scripting API. It’s still early for both these components, but I think it’s better if the design supports these from early on as opposed to trying to tack them on to the Editor at a later stage.

Finally, last week I took the time to bootstrap a higher OpenGL version on Windows. The Editor now has access to full OpenGL on this platform. This is a significant milestone that opens the door to bringing Vortex’s advanced rendering techniques to Windows, such as the FBO support depicted in the image above.

I’ve only got a short update for this week. Stay tuned for more to come : )

Vortex Editor .plan

Not too long ago, I started working on an Editor for the Vortex Engine. I have been toying with the idea for years and I finally decided to get started. Not only because it is going to be an interesting challenge, but also because I feel it’s a good way to improve the development workflow when using the engine.

A very early screenshot depicting a scene with a Box entity. No lighting, no mipmapping, no AA. “Crate” texture image courtesy of lighthouse3d.com

Using the Engine Today

Let’s take a look at the way I can build an App with the Vortex Engine today. First, I would create a new Application (be it a Linux, Mac or iOS App). Then, I would link against the engine and, finally, create a scene manually through the Vortex API.

Now, while this approach certainly works and even plays to one of Vortex’s strengths by allowing you to integrate the engine into any App without taking over the application loop, it does become cumbersome to create the scene programmatically.

The reason is that this process usually amounts to repeating a series of steps for every scene in the App:

  1. Start by taking a first stab in the dark.
  2. Build and run the App.
  3. Realize that you want to change the scene layout.
  4. Go back to the code, change it.
  5. Rebuild and re-run the App.
  6. Repeat from step 3 until you’re satisfied with the results.

The idea of the Editor is to tackle this problem head-on. With the Editor, you will be able to “see” the scene you are building, tweak it visually and then save it as a package that can be loaded by the engine.

Bringing Vortex to Windows

Starting a new project for the Editor raises the question of which platforms the App should run on. The Editor will be a desktop App, so ideally it would work on all three major desktop platforms: Windows, Linux and Mac.

Now, there is no point in making a new renderer for the Editor, as we want the scene we see in it to be as close as possible to what the final user Apps will render. What this means is that the Editor needs to run the engine.

Portability has always been one of the key tenets of the Vortex Engine, so this is the perfect opportunity to bring the engine to Windows, a platform it’s never run on before.

Bringing a codebase that was born on Linux and later expanded to support Mac and iOS over to Windows is the ultimate test of source-code-level portability. Once finished, the end result will be a flexible codebase that also adheres more closely to the standard.

So far, the two main challenges in building the engine on Windows have been adapting the codebase to build under the MSVC compiler and working around Windows’ barebones support for OpenGL.

Building on MSVC

Although Vortex is standard C++ and builds with both GCC and Clang, building it with MSVC required a few changes here and there to better conform to its front end.

This also meant reconsidering some dependencies of the engine to allow for a non-POSIX environment. Thankfully, the move to C++11 has already helped replace some UNIX-specific functions with now-standard equivalents.

OpenGL on Windows

Regarding OpenGL support, the windowing toolkit I’ve chosen for implementing the UI has proven to be more of a problem than a solution. At the time of writing, and mostly because I’m trying to keep a high velocity while building the Editor, I haven’t taken the time to bootstrap anything beyond OpenGL 1.1 support.

This would be a problem; however, Vortex’s Dual Pipeline support, which I first described in this post back in 2011, has proven essential by allowing the engine to scale down to OpenGL 1.1.

Dual Pipeline support: a Comparison of the Rendering Pipelines available in Vortex Engine. The image on the left represents the Fixed Pipeline. The image on the right represents the Programmable Pipeline.

The plan is to move forward with the basic Editor functionality and then drop in the programmable pipeline renderer later in the game, retiring the fixed one.

It’s quite amazing to see the fixed pipeline renderer, written about six years ago, running unmodified on a completely new platform it had never been tested on before. This is the true virtue of OpenGL.

In Closing

So far work is progressing nicely. As the image above shows, I have a simple proof-of-concept of the engine running inside the Editor skeleton under Windows. This is the foundation on which I will continue building the Editor App.

Stay tuned for more!

Conway’s Game of Life

This week we take a short break from 3D programming topics and go into gaming! Well, sort of…

A few weeks ago I published a CUDA implementation of Conway’s Game of Life on my GitHub page. The code is pretty simple, well in tune with the simplicity of the game itself.

The implementation can be found here: https://github.com/alesegovia/game-of-life.

If you are not familiar with the game, Conway’s Game of Life is a 0-player game where cells live and die on an infinite 2D grid. The life/death rules are the following, according to Wikipedia:

Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:

  1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

Conway’s game is excellent for implementing on a GPU, as it involves analyzing every cell in the 2D grid and, best of all, each cell’s next state depends only on the previous state of its neighbors and never on their current state.

This means that we can spawn a GPU thread for every single cell on the board and calculate the next state in parallel.

In the published implementation, the board size is 64×64 cells, so we are effectively spawning 4,096 GPU threads to solve every iteration. We do this for one million generations.
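
For reference, a minimal version of such a kernel might look like the sketch below. This is not the exact code from the repository; in particular, the wrap-around boundary handling is an assumption:

// One thread per cell: read the previous generation, write the next one.
__global__ void stepGeneration( const unsigned char* in, unsigned char* out,
                                int width, int height )
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if ( x >= width || y >= height )
        return;

    int liveNeighbors = 0;
    for ( int dy = -1; dy <= 1; ++dy )
    {
        for ( int dx = -1; dx <= 1; ++dx )
        {
            if ( dx == 0 && dy == 0 )
                continue;

            int nx = ( x + dx + width )  % width;  // wrap around the edges
            int ny = ( y + dy + height ) % height;
            liveNeighbors += in[ ny * width + nx ];
        }
    }

    unsigned char alive = in[ y * width + x ];

    // Rules 1-4 above: survive with 2 or 3 neighbours, spawn with exactly 3.
    out[ y * width + x ] = ( alive && ( liveNeighbors == 2 || liveNeighbors == 3 ) )
                        || ( !alive && liveNeighbors == 3 );
}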

The project has been released under a GPLv3 license, so feel free to download, build it, run it, modify it and share it with others under its terms.

If you are looking for a fun weekend project, the game could definitely use a UI. I’ll give you extra points if you can draw it using OpenGL without ever having to copy the board back from GPU memory into system memory ;-)

Enjoy!