Deferred Rendering – Part II: Handling Normals

This week work went into adding normal data to the G-Buffer and Light passes. I left normal data out while I was still implementing the architecture for the deferred renderer, so in the previous post we were only doing ambient lighting.

The Stanford Dragon 3D model rendered with a single directional light by the new deferred renderer.

Adding normal data required a few tweaks to several components of the renderer.

First, I had to extend my Vertex Buffers to accommodate an extra 3 floating point values (the normal data). If you’ve been following this post series, you’ll remember from this post that we designed our Interleaved Arrays with the format XYZUVRGBA. I’ve now expanded this format to include per-vertex normals. The new packing is: XYZNNNUVRGBA, where NNN denotes the normal x, y and z values.
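
As a rough sketch, the expanded layout can be pictured as the following per-vertex structure (the field names are illustrative; the engine actually packs these values into a flat float array, but the ordering and sizes are the same):

// Per-vertex layout for the XYZNNNUVRGBA format (illustrative field names).
struct Vertex
{
    float position[3]; // X, Y, Z
    float normal[3];   // NNN: normal x, y and z
    float uv[2];       // U, V
    float color[4];    // R, G, B, A
};

static_assert(sizeof(Vertex) == 12 * sizeof(float),
              "the layout must remain tightly packed for interleaved submission");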

Next, the G-Buffer had to be extended to include a third render target: a normal texture. This is where we store the interpolated normal data calculated during the Geometry pass.

Finally, the Light pass shader was extended to take in the normal texture, sample the per-fragment world-space normal and use it in its lighting calculation.
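
For illustration, the core of such a Light pass fragment shader might look like the following GLSL snippet, shown here as a C++ raw string the way it could be handed to a shader API. Every uniform and varying name is made up; this is a sketch of the technique, not Vortex’s actual shader.

// Hypothetical Light pass fragment shader (GLSL 330). uLightDirection is the
// light's direction vector, e.g. normalize(vec3(-1.0, -1.0, 0.0)).
static const char* kLightPassFragment = R"glsl(
#version 330 core
in vec2 vTexCoord;
out vec4 fragColor;

uniform sampler2D uAlbedoTexture;  // G-Buffer color target
uniform sampler2D uNormalTexture;  // G-Buffer world-space normal target
uniform vec3 uLightDirection;
uniform vec3 uAmbient;

void main()
{
    vec3 albedo = texture(uAlbedoTexture, vTexCoord).rgb;
    vec3 normal = normalize(texture(uNormalTexture, vTexCoord).xyz);
    float diffuse = max(dot(normal, -uLightDirection), 0.0);
    fragColor = vec4(albedo * (uAmbient + vec3(diffuse)), 1.0);
}
)glsl";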

The test box rendered with a hard directional light with direction (-1,-1,0).

For testing, I used our usual textured box model and a directional light with a rotating direction, as depicted above. The box is great for these kinds of tests, as its faces are axis-aligned and come in pairs with opposing normals.

Soon enough, the test revealed a problem: faces with negative normal components were not being lit. Drawing the normal data to the framebuffer during the Light pass helped narrow down the problem tremendously. It turned out that normal data sampled from the normal texture was being clamped to the [0, 1] range, which meant we could not represent normals with negative components.

Going through the OpenGL documentation for floating-point textures revealed the cause. According to the docs, if the data type of a texture ends up being defined as fixed-point, sampled data will be clamped. The culprit was a single incorrect parameter in the glTexImage2D calls used to create the render textures: the data type was being set to GL_UNSIGNED_BYTE for floating-point textures.

The fix was simple enough: set the data type to GL_FLOAT for floating-point textures, even if the internal format is already floating-point.
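
Putting it together, the corrected normal render target ends up looking roughly like this. The texture size, the GL_RGB16F internal format and the attachment index are assumptions on my part; the important detail is the GL_FLOAT data type.

// Allocate the normal render texture with a floating-point data type so that
// sampled values are not clamped to [0, 1]. width, height and gBufferFbo are
// placeholders for the engine's actual G-Buffer state.
GLuint normalTexture = 0;
glGenTextures(1, &normalTexture);
glBindTexture(GL_TEXTURE_2D, normalTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Attach it as the G-Buffer's third color output and have the Geometry pass
// write to all three attachments.
glBindFramebuffer(GL_FRAMEBUFFER, gBufferFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, normalTexture, 0);

const GLenum gBufferOutputs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, gBufferOutputs);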

Our old forktruck scene, loaded into the Editor and rendered using the first iteration of the deferred renderer.

With this fix in place, I could confirm that we are able to light a surface from any direction with our simple Light pass. The image above shows our old warehouse scene loaded in the Editor, with the forktruck being lit by the deferred renderer!

There are now several paths where work can continue: we can extend our serialization format to include material data, we can continue improving the visuals or we can start testing our code on Mac. Stay tuned for more!

Postprocessing Underpinnings

These past few weeks the majority of work went into establishing the underpinnings for frame postprocessing in the new V3 renderer.

A postprocessing effect that renders the framebuffer contents in grayscale.

Vortex 2.0 was the first version of the engine to introduce support for custom shaders and, although this opened the door to implementing postprocessing effects, the API was cumbersome to use for that purpose.

In general, the process boiled down to maintaining two separate scene graphs and manually controlling the render-to-texture process. This spilled many engine details into user programs, was prone to breaking whenever the engine changed, and required engine users to write hundreds of lines of boilerplate.

With V3 I want to make render-to-texture the default render mode. This means the engine never renders directly into the default framebuffer; instead, it always renders into a framebuffer object (FBO) whose contents we can then postprocess.
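
A minimal sketch of that setup, assuming an active Core OpenGL 3.3 context and using placeholder names and sizes:

// Create the off-screen target the scene is always rendered into.
GLuint sceneFbo = 0, sceneColor = 0, sceneDepth = 0;

glGenTextures(1, &sceneColor);
glBindTexture(GL_TEXTURE_2D, sceneColor);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenRenderbuffers(1, &sceneDepth);
glBindRenderbuffer(GL_RENDERBUFFER, sceneDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

glGenFramebuffers(1, &sceneFbo);
glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneColor, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sceneDepth);

// Every frame: draw the scene into sceneFbo, then bind the default framebuffer
// and run a full-screen postprocess pass that samples sceneColor.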

Architecting the renderer this way provides the opportunity to implement a myriad of effects that will significantly improve the visuals, while also keeping the nitty-gritty details hidden under the hood.

The image above shows a postprocessed scene where the framebuffer contents were desaturated as they were blitted onto the screen, producing a grayscale image.
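
As an illustration, the desaturation itself boils down to a tiny fragment shader along these lines. The names are made up, and the Rec. 601 luma weights are just one common choice.

// Hypothetical grayscale postprocess shader (GLSL 330), stored as a C++ raw
// string the way it might be handed to the engine's shader API.
static const char* kGrayscaleFragment = R"glsl(
#version 330 core
in vec2 vTexCoord;
out vec4 fragColor;

uniform sampler2D uSceneTexture; // the FBO color attachment being blitted

void main()
{
    vec3 color = texture(uSceneTexture, vTexCoord).rgb;
    float gray = dot(color, vec3(0.299, 0.587, 0.114)); // Rec. 601 luma weights
    fragColor = vec4(vec3(gray), 1.0);
}
)glsl";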

For the upcoming weeks, work will focus on building upon this functionality to develop the components necessary to support more advanced rendering in V3. Stay tuned for more!

Wrapping up 2016: Lessons Learned from working on Vortex

2016 marks a year where a lot of work went into both the Vortex Engine and the Vortex Editor. Working on both these projects during my free time has been great to continue to hone my C++ and OpenGL skills. In this post, I am going to do a quick retrospective on the work done and present a few lessons learned.

Lessons on making a UI-heavy application

Rendering a deserialized scene using the new renderer.

The way I decided to approach this work was to divide Vortex into two separate projects: the Vortex Engine project and a brand new Editor built on top of it. The Engine itself has been ongoing since 2010.

Today, both projects have reached a level of maturity where we could clearly have two engineers working full-time on them.

The Editor is definitely one of the driving forces pushing the Engine to support more and more visual features, all in order to give the user more expressive power. The amount of work required to expose these features to the outside world, however, is something I did not expect.

Let’s walk through a simple example: selecting an Entity and editing it so we can change its properties. In order to do this we must:

  1. Provide the user a means to select an Entity.
  2. React to this, introspecting the selected Entity and its components.
  3. Build and display a custom UI for each component in the Entity.
  4. For each component, render its UI widgets and preload them with the component’s current values.
  5. Provide the user the means to change these properties.
  6. Have the changes be reflected in the 3D scene immediately.

This system can quickly grow into thousands of lines of code. Even if the code does not have the stringent performance requirements of the rendering loop, we still need to develop responsive code with a good architecture that allows building more features on top of it.

The rewards from this effort are huge, however. The Editor UI is the user’s main point of interaction with Vortex; it’s how she tells the Engine what she wants it to do. Good UI and, more importantly, good UX are key in making the Editor enjoyable for the user.

Lessons on going with C++11

C++ Logo

I decided to finally make the jump and update the codebase from C++98 to C++11, and the min-spec for running the renderer from OpenGL ES 2.0 to Core OpenGL 3.3.

Going to C++11 was the right choice. I find that C++11 allows for more expressive power when developing a large C++ codebase, and it provides several utilities for making the code cleaner.

There are a few takeaways from using C++11, however, that I think may not be as clear for people just getting started with this version of the language.

Lessons on C++11 enum classes

I like enum classes a lot and I tend to use them as much as possible. There were several places throughout the legacy Vortex Engine code where old C-style structs and/or static const values used to declare configuration parameters did not look too clean. C++ enum classes helped wrap these while also keeping their enclosing namespaces clean.

The only limitation I found was using enum classes for bitmasks. enum class members can certainly be cast to make them behave as expected; doing so, however, strong-arms the type system, and some may argue it does away with the advantages of having it in the first place.

Additionally, if you try to implicitly convert the result of a bitwise operator expression involving an enum class value into a bool, you are going to hit a roadblock:
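
Here is a minimal sketch of the problem; the enum, the flag names and the operator overload are made up for illustration:

#include <cstdint>

enum class RenderFlags : std::uint32_t
{
    kParameterFlag = 1u << 0,
    kDepthTestFlag = 1u << 1,
};

inline RenderFlags operator&(RenderFlags lhs, RenderFlags rhs)
{
    return static_cast<RenderFlags>(static_cast<std::uint32_t>(lhs) &
                                    static_cast<std::uint32_t>(rhs));
}

void configure(RenderFlags mask)
{
    // if (mask & RenderFlags::kParameterFlag)  // error: no implicit conversion to bool
    if ((mask & RenderFlags::kParameterFlag) != static_cast<RenderFlags>(0)) // comparison required
    {
        // ...
    }
}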

I like writing if (mask & kParameterFlag), as I find it clearer to read than having a mandatory comparison against zero at the end, and C++11 enum classes do not give me that option.

Lessons on C++11 Weak Pointers

C++11 smart pointers (std::shared_ptr and std::weak_ptr) are great for an application like the Editor, where Entity references (pointers) need to be passed around.

Imagine this situation: an Entity with a few components is selected, and several UI components hold pointers to all of these objects. Now, if the user decides to delete the entity or remove one of its components, how do we prevent the UI code from dereferencing dangling pointers? We would have to hunt down every reference and null it.

Using C++11’s std::weak_ptr, we know that a pointer to a destroyed Entity or Component will fail to lock. This means that the UI controllers can detect situations where they are pointing to deleted data and gracefully handle it.
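
A tiny, self-contained sketch of the idea; the Entity type here is a stand-in, not the Editor’s actual class:

#include <iostream>
#include <memory>

struct Entity { int id = 42; };

int main()
{
    auto entity = std::make_shared<Entity>();
    std::weak_ptr<Entity> selection = entity; // what a UI controller would hold

    entity.reset(); // the user deletes the entity

    if (auto locked = selection.lock())
        std::cout << "still alive: " << locked->id << "\n";
    else
        std::cout << "entity is gone; clear the UI instead of dereferencing\n";
}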

Lessons on C++11 Shared Pointers

Like any other C++ object, smart pointers passed by value will be copied. Unlike copying a raw pointer, however, copying a shared pointer is a comparatively expensive operation.

Shared pointers need to keep a reference count to know when it’s okay to delete the managed object and, additionally, this bookkeeping must be performed in a thread-safe fashion. In practice, that means atomic increments and decrements (or, on some implementations, locks) for every shared pointer we copy and destroy.

If you are not careful, your CPU cycles will be spent copying pointers around, and you may have a hard time when you try to scale your Engine down to run on a mobile device.

I investigated this issue and found this amazing write up by Herb Sutter: https://herbsutter.com/2013/06/05/gotw-91-solution-smart-pointer-parameters/. The idea is to avoid passing shared pointers by copy to any function that does not intend to keep a long-term reference to the object.

Don’t be afraid of calling std::shared_ptr::get() to pass a raw pointer into a short, pure function that only performs temporary work. Opt into passing the smart pointer itself only when you want to signal to the outside world that the callee shares ownership of the passed-in object.
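
A sketch of that guideline, using made-up types rather than the actual Vortex API:

#include <memory>
#include <utility>

struct Texture { int handle = 0; };

// Short, temporary work: take a raw pointer and generate no reference-count traffic.
void bind(const Texture* texture) { /* e.g. glBindTexture(...) */ }

// Long-term ownership is the one case where the shared pointer itself should
// cross the call boundary.
struct Material
{
    void setTexture(std::shared_ptr<Texture> texture) { mTexture = std::move(texture); }
    std::shared_ptr<Texture> mTexture;
};

void render(const std::shared_ptr<Texture>& texture, Material& material)
{
    bind(texture.get());          // temporary use: no copy, no atomic operations
    material.setTexture(texture); // sharing ownership: one deliberate copy
}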

Lessons on using Core OpenGL

OpenGL Logo. Copyright (C) Khronos Group.

Choosing a specific minimum version of Core OpenGL helps put to rest the questions that pop up every time you use anything beyond OpenGL 1.1 and wonder whether you should implement the ARB / EXT variants as well.

Core OpenGL 3.3 makes it easier to discover the optimal GPU usage path, as you are now required to use VBOs, VAOs, Shaders and other modern Video Card constructs. It also has the added advantage that legacy OpenGL calls (which should not be used anyway) simply will not work.

Driver differences, however, are still pervasive enough that OpenGL code tested and verified to work on Windows may not work at all on OSX. Moving the codebase to a mobile device will again prove a challenge.

The lesson here is to never assume that your OpenGL code works. Always test on all the platforms you claim to support.

In Closing

These were some of the most prominent lessons learned from working on Vortex this year. I am looking forward to continuing to work on these projects through 2017!

I think we’ve only scratched the surface of what the new renderer’s architecture can help build and I definitely want to continue developing the renderer to support more immersive experiences, as well as the Editor so it exposes even more features to the user!

Thank you for joining in through the year and, as usual, stay tuned for more!

Reaching Feature Parity

Last week, work continued on different parts of the Editor. As the new renderer falls into place, I’ve been going through the Editor code, finding scaffolding and other legacy pieces that were originally built to make the Editor work with the 2011 renderer but were disabled as part of the renderer update.

Rendering a deserialized scene using the new renderer.

As I was working on the Editor, I was careful to have a clearly defined separation between Editor code and Engine code. This helped keep the impact of swapping out the renderer for a new one limited to a single C++ file.

Changing the renderer, however, did bring a fair share of feature regressions to the Editor, as some semantics had changed in terms of where entities need to be registered to have them drawn and how the UI controllers inspect their components.

Today, after several fixes here and there, I’ve been able to load a scene that had been serialized using the old Editor and have its entities displayed in the 3D World. This is good confirmation that the Editor is reaching a stable point, with feature parity to what we had a few months ago when we decided to rewrite the renderer.

Working on both the Editor and the Engine is and has been an amazing experience. In next week’s post, as a way to wrap up the year, I’m going to break down a few of the lessons learned. Stay tuned for more!

New RenderCamera Work

This week I reimplemented the camera logic from scratch.

The previous camera code in Vortex dated from around 2010 and, even though it worked and helped release a number of Apps, it had some limitations that made it difficult to build new features on top of it.

I was interested, in particular, in providing a mechanism through which camera animations could be controlled from native code and scripts, rather than just responding to user input directly.

The new vtx::RenderCamera is implemented as a Component that can be attached directly to an Entity, basically providing the potential to make any entity in the scene a camera. Once the active camera is selected, it is passed down to the new Rendering System, which draws the scene from that camera’s point of view.

This allows configuring a scene with several “vantage point” cameras in order to dynamically switch them around to make different cuts. At the same time, because cameras are attached to entities, entity movement provides a natural means to pan and rotate cameras through the scene.
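
As a purely hypothetical sketch of what this could look like from native code (vtx::RenderCamera comes from this post; every method name and helper object below is invented for illustration):

// Make any entity a camera by attaching a RenderCamera component to it.
auto cameraEntity = std::make_shared<vtx::Entity>("Vantage Point A");
auto camera = std::make_shared<vtx::RenderCamera>();
cameraEntity->addComponent(camera);
scene->addEntity(cameraEntity);

// The active camera is handed to the Rendering System, which draws the scene
// from that entity's point of view.
renderingSystem->setActiveCamera(camera);

// Because the camera is just a component, moving the owning entity pans and
// rotates the camera naturally.
cameraEntity->transform().setPosition(0.0f, 2.0f, 5.0f);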

I’m very happy with the result of this rework, as it provides all the functionality we had before on top of a solid, expandable foundation.

Unfortunately, we still don’t have a way to draw a camera gizmo in the Editor, so there is no screenshot this week. Next week, however, I’m planning on continuing to work on some visual features, so definitely stay tuned for more!

Shader Texture Mapping and Interleaved Arrays

Work continues on the new Rendering System for the Vortex Engine.

This week was all about implementing basic texture mapping using GL Core. The following image shows our familiar box, only this time, it’s being textured from a shader.

Texture Mapping via Interleaved Arrays and Shaders in the Vortex Engine.

A number of changes had to go into the Renderer in order to perform texture mapping. These touched almost all the layers of the Engine and Editor.

  1. First, I wrote a “Single Texture” shader in GLSL to perform the perspective transform of the Entity’s mesh, interpolate its UV texture coordinates and sample a texture.
  2. Second, I had to change the way the retained mesh works in order to be able to send texture coordinate data to the video card (more on this later).
  3. Finally, I had to modify the Editor UI to allow selecting which shader is to be used when rendering an Entity.

So, regarding how to submit mesh texture coordinate data to the video card: because Core OpenGL uses Vertex Buffer Objects (VBOs), it was clear that UVs had to be sent to (and retained in) GPU memory too.

There are two ways to achieve this in OpenGL. One consists of creating several VBOs so that each buffer holds one vertex attribute: position data is stored in its own buffer, texture coordinates are stored in their own buffer, per-vertex colors take a third buffer, and so on and so forth.

There’s nothing wrong with doing things this way and it definitely works. There is one consideration to take into account, however: when we scatter our vertex data across several buffers, the GPU has to gather all of this data at render time, every frame, as part of vertex processing.

I am personally not a big fan of this approach. I prefer interleaving the data myself once and then sending it to the video card in a way that’s already prepared for rendering. OpenGL is pretty flexible in this regard and it lets you interleave all the data you need in any format you may choose.

I’ve chosen to store position data first, then texture coordinate data and, finally, color data (XYZUVRGBA). The retained mesh class will be responsible for tightly packing vertex data into this format.

Once data is copied over to video memory, setting up the vertex attrib pointers can be a little tricky. This is where interleaved arrays become more challenging than separate attribute buffers. An error here will cause the video card to read garbage memory and possibly crash in the video driver. This is not a good idea: I’ve seen errors like this bring down entire operating systems, and it’s not a pretty picture.

A sheet of paper will help when calculating the byte offsets: write down the logic and manually verify it with pen and paper before trusting it. The video driver must never read outside the interleaved array.
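
For reference, here is roughly what that setup looks like for the XYZUVRGBA layout. The attribute locations are assumptions and must match the shader; the VAO and VBO are assumed to be bound already.

// 3 position + 2 texture coordinate + 4 color floats per vertex.
const GLsizei stride = 9 * sizeof(GLfloat);

glEnableVertexAttribArray(0); // position: XYZ, starts at byte offset 0
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, reinterpret_cast<void*>(0));

glEnableVertexAttribArray(1); // texture coordinates: UV, after 3 floats
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, reinterpret_cast<void*>(3 * sizeof(GLfloat)));

glEnableVertexAttribArray(2); // color: RGBA, after 3 + 2 floats
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, stride, reinterpret_cast<void*>(5 * sizeof(GLfloat)));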

Once ready, OpenGL will take care of feeding the vertex data into our shader inputs, where we will interpolate the texture coordinates and successfully sample the bound texture, as shown in the image above.

Now that we have a working foundation where we can develop custom shaders for any Entity in the scene, it’s time to start cleaning up resource management. Stay tuned for more!

First Steps for the V3 Renderer

This week, work continued on the new V3 Renderer.

First steps for the V3 renderer.

In the image above, we can see that the renderer is starting to produce some images.

It might not look like much at the moment and, indeed, there’s still work left to reach parity with the legacy fixed-pipeline renderer. Nonetheless, being able to render the floor grid and the PK Knight model is an important step that validates that the core Entity-Component traversal, the Core OpenGL rendering code, the shaders and the new material and retained mesh systems are interacting correctly.

As with any rendering project where you start from scratch, until the moment where the basic foundation comes together, you have no option but to rely on your code, a piece of paper and a bunch of scaffolding code. Everything has to be built “in the dark”, without being able to see anything on the screen.

Now that we’ve established this foundation, however, we can continue to build the V3 Renderer with visual feedback, which should help tremendously.

Next step: on to basic texture mapping!

Stay tuned for more!

Vortex V3 Renderer

The past couple of weeks have been super busy at work, but I’ve still managed to get the ball rolling with the new renderer for the Vortex Engine.

This is the first time in years I’ve decided to actually write a completely new renderer for Vortex. This new renderer is a complete clean-room implementation of the rendering logic, designed specifically for the new Entity-Component system in the engine.

Current Rendering Systems in Vortex

Let’s start by taking a look at the current rendering systems in the Vortex Engine. Ever since 2011, Vortex has had two rendering systems: a Fixed Pipeline rendering system and a Programmable Pipeline rendering system.

Dual Pipeline support: a Comparison of the Rendering Pipelines available in Vortex Engine. The image on the left represents the Fixed Pipeline. The image on the right represents the Programmable Pipeline.

Both these rendering systems are pretty robust. Both have been used to develop and launch successful apps in the iOS App Store and they have proven to be reliable and portable, allowing the programmer to target Linux, Windows, Mac OS X and Android, as well as iOS.

The problem with these renderers is that they were designed with Vortex’s Scenegraph-based API in mind. This means that these renderers do not know anything about Entities or Components, but rather, they work at the Node level.

Moving forward, the direction for the Vortex Engine is to provide an Entity Component interface and move away from the Scenegraph-based API. This means that glue code has to be developed to allow the traditional renderers to draw the Entity-Component hierarchy.

So… why is this a problem?

Why a new Renderer?

As Vortex V3 now provides a brand new Entity-Component hierarchy for expressing scenes, glue code had to be developed in order to leverage the legacy renderers in the Vortex Editor. In the beginning this was not a major problem; however, as the Entity-Component system has matured, it has become ever more difficult to maintain compatibility with the legacy renderers.

PBR Materials in Unreal Engine 4. Image from ArtStation’s Pinterest.

Another factor is the incredible pace at which the rendering practice has continued to develop over the past few years. Nowadays, almost all modern mobile devices support OpenGL ES 2.0 and even 3.0, and PBR rendering has gone from a distant possibility to a very real technique on mobile hardware. Supporting PBR on these legacy renderers would require a significant rewrite of their core logic.

Finally, from a codebase standpoint, both renderers were implemented more than 5 years ago, back when C++11 was just starting to get adopted and compiler support was very limited. This does not mean that the legacy renderers’ codebases are obsolete by any means, but by leveraging modern C++ techniques, they could be cleaned up significantly.

From all of this, it is clear that a new clean-room implementation of the renderer is needed.

Designing a New Renderer

The idea is for the new renderer to be able to work with the Entity-Component hierarchy natively without a translation layer. It should be able to traverse the hierarchy and determine exactly what needs to be rendered for the current frame.

Once the objects to be rendered have been determined, a new and much richer material interface will determine exactly how to draw each object according to its properties.

Just like with the Vortex 2.0 renderer, this new renderer should fully support programmable shaders, but through a simplified API that requires less coding and allows drawing much more interesting objects and visual effects.

Choosing a backing API

Choosing a rendering API used to be a simple decision: pick DirectX for Windows-only code or OpenGL (ES) for portable code. The landscape has changed significantly in the past few years, however, and there are now a plethora of APIs we can choose from to implement hardware-accelerated graphics.

This year alone, the Khronos Group released the Vulkan specification, a new API that tackles some of the problems inherent to OpenGL, as seen in the following image.

Comparison of the OpenGL and Vulkan APIs of the Khronos Group. Slide Copyright (C) 2016 Khronos Group.

Now, both Vulkan and Metal are very appealing low-level APIs that provide a fine degree of control over the hardware, but each is limited in its own way: Metal is Apple-specific, while Vulkan is cross-platform but not available on Apple devices.

DirectX 12 is Windows 10 only, which rules it out right off the bat (for this project, at least). DirectX 11 is a good option but, again, Windows only.

This leaves OpenGL and OpenGL ES as the two remaining options. I’ve decided to settle for Core OpenGL 3.3 at this time. I think it’s an API that exposes enough modern concepts to allow implementing a sophisticated renderer while also remaining fully compatible with Windows, iOS and everything in-between.

I don’t rule out implementing a dedicated Metal or Vulkan backend for Vortex in the future, and nothing in the Engine’s design should prevent this from happening. At this time, however, we have to start with a platform that’s available everywhere.

Using Core OpenGL 3.3 will also allow reusing the battle-tested shader API in Vortex. This component has several years of service under its belt, and I’d venture to say that all of its bugs have been found and fixed.

Besides this particular component, I’m also reusing the material interface (while completely overhauling it) and developing a new RetainedMesh class to better handle streaming mesh data to the GPU.

Closing Thoughts

Writing a comprehensive renderer is no weekend task. A lot of components must be carefully designed and built to fit together. The room for error is minimal, and any problem in any component that touches anything related to the Video Card can potentially make the entire system fail.

It is, at the same time, one of the most satisfying tasks that I can think of as a software engineer. Once you see it come to life, it’s more than a sum of its parts: it’s a platform for rendering incredible dream worlds on a myriad of platforms.

I will take my time developing this new renderer, enjoying the process along the way.

Stay tuned for more! : )

The slow road to persistence: encoding

This week I took on the large task of implementing a serialization scheme that allows saving the user’s scene onto their disk.

A scene containing a textured forktruck model.

The problem at hand can be divided into two major tasks:

  1. Serialize the scene by encoding its contents into some format.
  2. Deserialize the scene by decoding the format to recreate the original data.

I’ve chosen JSON as the serialization format for representing the scene’s static data. That is, entities and components, along with all their properties and relationships, are to be encoded into a JSON document that can then be used to create a perfect clone of the scene.

Why JSON? JSON is a well-known hierarchical file format that provides two big benefits: first, it’s easy for humans to read and hence debug. Second, it maps almost 1:1 to the concept of Entity and Component hierarchies.

It’s also worth noting that JSON tooling is excellent, making it easy to quickly whip up a Python or Javascript utility that works on the file.

The downside is that reading and writing JSON is not as efficient as reading a custom binary format. However, I consider scene loading and saving a rare enough operation, most likely performed outside the main game loop, that JSON is a feasible option at this time.

OK, so without further ado, let’s take a look at what a scene might look like once serialized. The following listing presents the JSON encoding of the scene depicted above. It was generated with the new vtx::JsonEncoder class.

{
   "entities" : [
      {
         "children" : [],
         "components" : [
            {
               "type" : 0
            },
            {
               "type" : 1
            }
         ],
         "name" : "2D Grid",
         "transform" : {
            "position" : {
               "x" : 0,
               "y" : -0.5,
               "z" : 0
            },
            "rotationEuler" : {
               "x" : 0,
               "y" : 0,
               "z" : 0
            },
            "scale" : {
               "x" : 1,
               "y" : 1,
               "z" : 1
            }
         }
      },
      {
         "children" : [],
         "components" : [
            {
               "type" : 0
            },
            {
               "type" : 1
            },
            {
               "type" : 100
            }
         ],
         "name" : "forktruck.md2",
         "transform" : {
            "position" : {
               "x" : 0,
               "y" : -0.5,
               "z" : 0
            },
            "rotationEuler" : {
               "x" : 4.7123889923095703,
               "y" : 0,
               "z" : 0
            },
            "scale" : {
               "x" : 0.0099999997764825821,
               "y" : 0.0099999997764825821,
               "z" : 0.0099999997764825821
            }
         }
      }
   ]
}

The document begins with a list of entities. Each entity contains a name, a transform, a list of children and a list of components. Notice how the transform is a composite object on its own, containing position, rotation and scale objects.

Let’s take a look at the encoded forktruck entity. We see its name has been stored, as well as its complete transform. In the future, when we decode this object, we will be able to create the entity automatically and place it exactly where it needs to be.
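
Just to illustrate what that decoding step could look like, the following sketch walks the entities array and prints each entity’s name and position. It uses the third-party nlohmann/json library purely for illustration; it is not the decoder Vortex will ship.

#include <nlohmann/json.hpp>

#include <fstream>
#include <iostream>

int main()
{
    std::ifstream file("scene.json");
    nlohmann::json doc = nlohmann::json::parse(file);

    for (const auto& entity : doc["entities"])
    {
        const auto& position = entity["transform"]["position"];
        std::cout << entity["name"].get<std::string>() << " at ("
                  << position["x"].get<double>() << ", "
                  << position["y"].get<double>() << ", "
                  << position["z"].get<double>() << ")\n";
    }
}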

Now, you may have noticed that components look a little thin. Component serialization is still a work in progress and, at this time, I am only storing their types. As I continue to work on this feature, components will have all their properties stored as well.

On a more personal note, I don’t recall having worked with serialization/deserialization at this scale before. It has already proven to be a large yet satisfying challenge. I am excited at the prospect of being able to save and transfer full scenes, perhaps even over the wire and to a different type of device.

The plan for next week is to continue to work on developing the serialization logic and getting started with the deserialization. Once this task is complete, we will be ready to move on to the next major task: the complete overhaul of the rendering system!

Stay tuned for more!

New Scale Tween Component for the Vortex Engine

This week was pretty packed, but I found some time to write the final addition to the native components for the vertical slice of the Editor. This time around, the new Scale Tween Component joins the Waypoint Tween and Rotation components to provide an efficient way to animate the scale of an entity.

With this new component, Vortex now supports out-of-the-box animation for all basic properties of an entity: position, rotation and scale.

The current lineup of built-in components for Vortex V3 is now as follows:

  • vtx::WaypointTweenComponent: continuously move an entity between 2 or more predefined positions in the 3D world.
  • vtx::RotationComponent: continuously rotate an entity on its Ox, Oy or Oz axis.
  • vtx::ScaleTweenComponent: continuously animate the scale of an entity to expand and contract.

The Scale Tween Component was implemented as a native plugin that taps directly into the Core C++ API of the Vortex Engine. This allows leveraging the speed of native, optimized C++ code to animate an entity’s scale with very low overhead.

In the future, once we have implemented scripting support, a script that desires to alter the scale will be able to just add a Scale Tween Component to the affected entity, configure the animation parameters (such as speed, scaling dimensions and animation amplitude) and rely on the Entity-Component System to perform the animation of the transformation automagically.
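
Purely as a hypothetical sketch (only the component class name comes from this post; every method below is invented for illustration), such a setup might read like:

// Attach and configure the component; the Entity-Component System takes it
// from here and animates the entity's scale every frame.
auto scaleTween = std::make_shared<vtx::ScaleTweenComponent>();
scaleTween->setSpeed(2.0f);            // animation speed (assumed unit: cycles per second)
scaleTween->setAmplitude(0.25f);       // expand/contract by 25% around the base scale
scaleTween->setAxes(true, true, true); // scale on x, y and z

entity->addComponent(scaleTween);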

Of course, Scale Tween components can also be added statically to entities by means of the Vortex Editor UI.

Time permitting, next week we’ll finally be able to move on to entity hierarchy and component persistence. I want to roll this feature out in two phases: first implement the load operation and then, once it has proven to be solid, implement saving from the Editor UI.

There’s this and much more coming soon! Stay tuned for more!