The slow road to persistence: encoding

This week I took on the large task of implementing a serialization scheme that allows saving the user’s scene to disk.

A scene containing a textured forktruck model.

The problem at hand can be divided into two major tasks:

  1. Serialize the scene by encoding its contents into some format.
  2. Deserialize the scene by decoding the format to recreate the original data.

I’ve chosen JSON as the serialization format for representing the scene’s static data. That is, entities and components, along with all their properties and relationships, are to be encoded into a JSON document that can then be used to create a perfect clone of the scene.

Why JSON? JSON is a well-known hierarchical file format that provides two key benefits: first, it’s easy for humans to read and hence to debug. Second, it maps almost 1:1 to the concept of Entity and Component hierarchies.

It’s also worth noting that JSON tooling is excellent, making it trivial to whip up a Python or JavaScript utility that works on the file.

The downside is that reading and writing JSON is not as efficient as using a custom binary format. However, scene loading and saving are rare operations, most likely performed outside of the main game loop, so JSON is a perfectly feasible option at this time.

OK, so without further ado, let’s take a look at what a scene might look like once serialized. The following listing presents the JSON encoding of the scene depicted above. It was generated with the new vtx::JsonEncoder class.

{
   "entities" : [
      {
         "children" : [],
         "components" : [
            {
               "type" : 0
            },
            {
               "type" : 1
            }
         ],
         "name" : "2D Grid",
         "transform" : {
            "position" : {
               "x" : 0,
               "y" : -0.5,
               "z" : 0
            },
            "rotationEuler" : {
               "x" : 0,
               "y" : 0,
               "z" : 0
            },
            "scale" : {
               "x" : 1,
               "y" : 1,
               "z" : 1
            }
         }
      },
      {
         "children" : [],
         "components" : [
            {
               "type" : 0
            },
            {
               "type" : 1
            },
            {
               "type" : 100
            }
         ],
         "name" : "forktruck.md2",
         "transform" : {
            "position" : {
               "x" : 0,
               "y" : -0.5,
               "z" : 0
            },
            "rotationEuler" : {
               "x" : 4.7123889923095703,
               "y" : 0,
               "z" : 0
            },
            "scale" : {
               "x" : 0.0099999997764825821,
               "y" : 0.0099999997764825821,
               "z" : 0.0099999997764825821
            }
         }
      }
   ]
}

The document begins with a list of entities. Each entity contains a name, a transform, a list of children and a list of components. Notice how the transform is a composite object on its own, containing position, rotation and scale objects.

Let’s take a look at the encoded forktruck entity. We see its name has been stored, as well as its complete transform. In the future, when we decode this object, we will be able to create the entity automatically and place it exactly where it needs to be.

Now, you may have noticed that components look a little thin. Component serialization is still a work in progress and, at this time, I am only storing their types. As I continue to work on this feature, components will have all their properties stored as well.
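
To illustrate where this is heading, here is a minimal sketch of what a fully serialized component might look like: its type id plus a bag of properties. The post does not say which JSON library Vortex uses internally, so this sketch uses nlohmann::json purely for illustration, and the property names are hypothetical.

// Illustrative only: the property names and the use of nlohmann::json are
// assumptions, not the actual vtx::JsonEncoder internals.
#include <nlohmann/json.hpp>
#include <iostream>

int main()
{
   nlohmann::json component;
   component["type"] = 100;                       // e.g. the MD2 animation component
   component["properties"]["animation"] = "run";  // hypothetical property
   component["properties"]["looping"]   = true;   // hypothetical property

   std::cout << component.dump(3) << std::endl;   // 3-space indent, like the listing above
   return 0;
}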

On a more personal note, I don’t recall having worked with serialization/deserialization at this scale before. It’s a large undertaking, but one that has already proven satisfying. I am excited at the prospect of being able to save and transfer full scenes, perhaps even over the wire and to a different type of device.

The plan for next week is to continue developing the serialization logic and to get started on deserialization. Once this task is complete, we will be ready to move on to the next major task: the complete overhaul of the rendering system!

Stay tuned for more!

New Scale Tween Component for the Vortex Engine

This week was pretty packed, but I found some time to write the final addition to the native components for the vertical slice of the Editor. This time around, the new Scale Tween Component joins the Waypoint Tween and Rotation components to provide an efficient way to animate the scale of an entity.

With this new component, Vortex now supports out-of-the-box animation for all basic properties of an entity: position, rotation and scale.

The current lineup of built-in components for Vortex V3 is now as follows:

  • vtx::WaypointTweenComponent: continuously move an entity between 2 or more predefined positions in the 3D world.
  • vtx::RotationComponent: continuously rotate an entity on its Ox, Oy or Oz axis.
  • vtx::ScaleTweenComponent: continuously animate the scale of an entity to expand and contract.

The Scale Tween Component was implemented as a native plugin that taps directly into the Core C++ API of the Vortex Engine. This allows leveraging the speed of optimized native C++ code to animate an entity’s scale with very low overhead.

In the future, once we have implemented scripting support, a script that wants to alter an entity’s scale will be able to simply add a Scale Tween Component to it, configure the animation parameters (such as speed, scaling dimensions and animation amplitude) and rely on the Entity-Component System to drive the transformation automagically.
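
As a rough sketch of how that might read, the snippet below attaches the component and configures it through a handful of setters. The setter names, parameters and the addComponent call are assumptions made for illustration; the actual Vortex API may differ.

// Hypothetical usage sketch; the method names are assumed, not the real Vortex API.
vtx::Entity* crate = scene->findEntity("crate");

auto* scaleTween = new vtx::ScaleTweenComponent();
scaleTween->setSpeed(2.0f);              // assumed parameter: animation cycles per second
scaleTween->setAmplitude(0.25f);         // assumed parameter: expand/contract by 25% of base scale
scaleTween->setAxes(true, true, true);   // assumed parameter: animate uniformly on x, y and z

crate->addComponent(scaleTween);         // from here on, the ECS drives the animation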

Of course, Scale Tween components can also be added statically to entities by means of the Vortex Editor UI.

Time permitting, next week we’ll finally be able to move on to entity hierarchy and component persistence. I want to roll this feature out in two phases: first implement the load operation and then, once it has proven solid, implement saving from the Editor UI.

There’s this and much more coming soon! Stay tuned for more!

Revisiting the Entity View

Work continued on the Vortex Editor this past week. One UI element that has always been somewhat of a functional placeholder, and that I’d been meaning to revisit, is the Entity View list.

Sitting on the left side of the UI, the Entity View shows all entities in a scene and provides a way to select them. As entities gained the ability to be parented to one another, however, it became apparent this view had to evolve to better express entity relationships.

The new Entity View in the Vortex Editor (shown on the left) displays parenting relationships between entities.

A natural solution to this problem is to represent the entities in a hierarchical view (such as a tree). Not only does this convey which entities are parented to which, it also allows for better organization of the project, as empty entities can now be used to group together other entities related to a particular game/simulation mechanic.

The above screenshot shows the new view in action. This is a 3D model loaded from the Wavefront OBJ format, which allows for describing hierarchies of objects. The Vortex Engine’s OBJ loader recognizes these object groupings in the OBJ format and represents each as a separate entity. All of these are then parented to a single entity representing the 3D model in its entirety.
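
A rough sketch of that parenting step is shown below. The ObjModel/ObjGroup types and the entity methods are assumed names used only to illustrate the idea.

// Illustrative sketch: turn each OBJ object grouping into a child entity of a single root.
vtx::Entity* root = scene->createEntity(model.name());   // one entity for the whole 3D model

for (const ObjGroup& group : model.groups())
{
   vtx::Entity* child = scene->createEntity(group.name());
   child->setMesh(group.mesh());   // each OBJ object/group becomes its own entity
   root->addChild(child);          // parented under the model's root entity
}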

Overall I’m very satisfied with the results. There is still more work to be done, but I think that this is a valuable UX improvement that will go a long way.

Once this work is complete, the plan is to work on the ability to persist the entities and their configuration (including all attached components). I have some ideas on how to bring this about, but there are still some interesting problems that will have to be resolved.

Stay tuned for more!

Implementing a Waypoint System in the Vortex Engine

This week, we’re back to developing new native components for the Vortex Engine. The objective this time was to develop a “Waypoint Tween” component that moves an entity’s position between a series of points.

The new Waypoint Tween Component is used to move a 3D model between four points.

There are two main aspects to bringing the system to life: the component implementation in the Vortex Engine and the UI implementation in the Vortex Editor.

At the engine level, the system is implemented as a C++ component that is very fast at performing the math necessary to interpolate positions between points based on time and speed.

At the editor level, due to the flexibility of this system, exposing its properties actually required a significant amount of UI work. In this first iteration, points can be specified directly in the component properties of the inspector panel. Later on, the plan is to allow the user to specify the points as actual entities in the world and then reference them.
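
At its core, the math boils down to linear interpolation between consecutive waypoints, advanced by speed and frame time. The following is a minimal, self-contained sketch of that idea, not the actual Vortex implementation (which also handles orientation and looping modes).

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Moves a position along a list of waypoints at a constant world-space speed.
struct WaypointTween
{
   std::vector<Vec3> points;
   std::size_t current = 0;   // waypoint we are moving away from
   float t = 0.0f;            // normalized progress towards the next waypoint
   float speed = 1.0f;        // world units per second

   Vec3 update(float deltaTime)
   {
      const Vec3* a = &points[current];
      const Vec3* b = &points[(current + 1) % points.size()];
      float dx = b->x - a->x, dy = b->y - a->y, dz = b->z - a->z;
      float length = std::sqrt(dx * dx + dy * dy + dz * dz);

      t += (speed * deltaTime) / length;   // advance proportionally to segment length
      if (t >= 1.0f)                       // reached the next waypoint: start the next segment
      {
         t -= 1.0f;
         current = (current + 1) % points.size();
         a = &points[current];
         b = &points[(current + 1) % points.size()];
         dx = b->x - a->x; dy = b->y - a->y; dz = b->z - a->z;
      }

      return { a->x + dx * t, a->y + dy * t, a->z + dz * t };
   }
};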

Animation and Movement

Now, in the animated GIF above, it can be seen that the 3D model is not only moving between the specified points, but also appears to be running between them.

Two factors come into play to achieve this effect: the MD2 Animation and the Waypoint Tween components.

The MD2 Animation and Waypoint Tween Components.

When enabled, the Animate Orientation property of the Waypoint Tween component orients the 3D model so that it’s looking towards the direction of the point it’s moving to.

This property is optional, as there are cases where the behavior would be undesirable. For instance, imagine implementing a conveyor belt that moves boxes along its top: it would look weird if the boxes magically rotated on their Oy axis. For a character, on the other hand, it makes complete sense for the model to be oriented towards the point it’s moving to.
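
For reference, assuming Oy is the up axis and that the model faces +Z at zero rotation, the yaw needed to face the travel direction can be derived from the horizontal components of the movement vector. A minimal sketch:

#include <cmath>

// Illustrative: yaw (rotation around Oy) that makes a model face its travel direction.
// Assumes the model's "forward" is +Z when the yaw is zero.
float yawTowards(float dirX, float dirZ)
{
   return std::atan2(dirX, dirZ);   // radians; this would feed the entity's rotationEuler.y
}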

Regarding the run animation, if you have been following our series on the Vortex Editor, you will remember that when instantiated by the engine, MD2 Models automatically include an MD2 Animation Component that handles everything related to animating the entity.

More details can be found in the post where we describe how MD2 support is implemented, but the idea is that we set the animation to a looping “run” cycle.

When we put it all together, we get an MD2 model that is playing a run animation as it patrols between different waypoints in the 3D world.

Waypoint System in Practice

So how can the waypoint system be used in practice? I envision two uses for the waypoint system.

The first one is for environment building. Under this scenario, the component system is used to animate objects in the background of the scene. Case in point, the conveyor belt system described above.

The second use, which might be a little more involved, would be to offload work from scripts. The efficient C++ implementation of the waypoint system would allow a component developed in a scripting language to have an entity move between different points without having to do the math calculations itself.

The dynamic nature of the component would allow such a script to add and remove points, as well as to interrupt the system at any time to perform other tasks. An example would be a monster that uses the waypoint system to patrol an area of the scene; when a player is detected nearby, the system is interrupted and a different one takes over, perhaps to attack the player.

In closing

I had a lot of fun implementing this system, as it brings a lot of options to the table in terms of visually building animated worlds for the Vortex Engine.

The plan for next week is to continue working on the Editor. There is some technical debt on the UI I want to address in order to improve the experience and there are also a couple of extra components I want to implement before moving on to other tasks.

As usual, stay tuned for more!

Building a Vortex Engine scene using the Vortex Editor

This week I’ve been working on several different UX enhancements for the Editor to improve the overall scene building experience.

Building a scene for the Vortex Engine using the Vortex Editor.

One thing to notice in the above image is the new horizontal grid on the floor. Each cell in the grid is exactly 1×1 world unit in size, which helps tremendously in keeping a sense of scale and distance between objects when building the scene.
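
A grid like this is typically just a batch of line segments spaced one world unit apart on the XZ plane. As a small, self-contained sketch of how the vertex data could be generated (not necessarily how the Editor does it):

#include <vector>

// Generates line-pair vertices (x, y, z per vertex) for a square grid on the XZ plane
// at y = 0, with 1x1 world-unit cells spanning [-halfExtent, halfExtent].
std::vector<float> buildGridVertices(int halfExtent)
{
   std::vector<float> verts;
   for (int i = -halfExtent; i <= halfExtent; ++i)
   {
      // line parallel to the Z axis
      verts.insert(verts.end(), { (float)i, 0.0f, (float)-halfExtent,
                                  (float)i, 0.0f, (float) halfExtent });
      // line parallel to the X axis
      verts.insert(verts.end(), { (float)-halfExtent, 0.0f, (float)i,
                                  (float) halfExtent, 0.0f, (float)i });
   }
   return verts;
}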

What I like about this image is how it shows the different components of the Editor coming together to make the Vortex Engine easier to use than ever. The more complete the Editor is, the quicker we can move on to start implementing the new rendering backend for Vortex, codename “V3”.

Stay tuned for more weekly updates!

Off-the-Shelf Native Components

At the end of last week’s article, we signed off mentioning the possibility of providing two component APIs for the Entity Component System of the Vortex Editor. Although I’m still very much behind this idea, the more I lean towards a scripting language, the more concerned I become about performance.

I am worried that scripted components might not be able to meet the budget of updating in less than 16.6 ms once we have a few of them in any given scene. 16.6 ms (1000 ms / 60 frames) is the time slice we have to update all components and render the scene on the screen in order to achieve 60 FPS.

A native component, on the other hand, can benefit from all the advantages of going through an optimizing compiler and requires no parameter marshaling when interfacing with Vortex’s C++ API.

In light of this, what I’ve been considering is providing a rich set of native components that are part of the Vortex API and that a script can attach to Entities. This would provide the advantage of being able to dynamically add and remove behavior on Entities, while still hitting native performance during the update-render loop.

Of course, I’m still very interested in providing a means to create components through scripts, but perhaps these can be used mostly for exploring ideas, and if performance becomes a problem, then the best practice would be to move to a native implementation of the component.

This week, as a dry run of the Component API, I’ve implemented a simple Rotation Component that makes any Entity spin on its local axes. A custom UI widget in the Vortex Editor allows configuring the speed and rotation axis of the component in real time, while visualizing the animation in the 3D view.
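
The heart of such a component is a one-line update: accumulate the angular speed, scaled by the frame time, on the configured axis. Below is a minimal sketch with assumed member names; it is not the actual Vortex source.

// Illustrative sketch of a per-frame rotation update.
struct RotationComponent
{
   int axis = 1;         // 0 = Ox, 1 = Oy, 2 = Oz, picked from the Editor widget
   float speed = 1.0f;   // angular speed (in the engine's angle units) per second

   void update(float deltaTimeSeconds, float rotationEuler[3])
   {
      rotationEuler[axis] += speed * deltaTimeSeconds;   // spin on the selected local axis
   }
};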

I’m pretty happy with the results, and I believe this is the way to move forward with the Entity Component System of the Engine.

Next week I will continue working on the Editor and Engine, trying to maximize what you can do out of the box. Stay tuned for more!

Entity Components in the Vortex Engine

Last week, we got to a point where we can drag and drop an MD2 file into the 3D view and it automatically instantiates an Entity representing the 3D model. Now, because MD2 models support keyframe animation, suppose we wanted to somehow provide a way for this model to be correctly animated…

An example component driving a keyframe animation for a 3D model.

The Problem

One way to go about adding animations would be to implement the MD2 keyframe logic inside the Entity class, so that I need only send an update message to the Entity and it will update the vertex data it holds.

However, because I can also instantiate Entities from static OBJ files, it doesn’t make much sense for this logic to be part of the Entity class. So what could we do instead?

Well, again, we could create a sub-class of Entity with the logic we need for the animation. The problem with this approach is that, sooner rather than later, we end up with a huge class hierarchy that needs constant refactoring and intermediate super-classes just so that Entities can share some behaviors (but not others) in a way that enables code reuse.

The Entity-Component-System Model

Wouldn’t it be nice if, instead of extending and inheriting code, I could somehow cherry-pick the properties and functionality that I want for my Entities from a set of prebuilt components? Not only could we build a set of off-the-shelf tested and reusable components that we could mix and match, but also, we would be favoring composition over inheritance, effectively eliminating the need for sub-classing altogether!

This is the main idea behind the Entity-Component-System architecture of modern Game Engines, and it has been the main focus of the work in the Engine this week.

The brand new Component architecture allows adding properties (and even behavior!) to Entities dynamically, in a flexible way that prevents coupling and encourages code reuse. The idea behind it is simple enough: components that are added to entities receive regular update() calls (once per frame) that can then be used to affect the Entity they are attached to, or the world around them.
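
Conceptually, the contract is small: a component knows which Entity it is attached to and gets an update() call every frame. The sketch below captures that shape; the class and member names are illustrative and not the actual Vortex API.

#include <memory>
#include <vector>

class Entity;   // forward declaration

// Illustrative base class: the only contract is a per-frame update() call.
class Component
{
public:
   virtual ~Component() = default;
   virtual void update(float deltaTime) = 0;
   Entity* entity = nullptr;   // the Entity this component is attached to
};

class Entity
{
public:
   void addComponent(std::unique_ptr<Component> component)
   {
      component->entity = this;
      components.push_back(std::move(component));
   }

   // Called once per frame by the engine's main loop.
   void update(float deltaTime)
   {
      for (auto& component : components)
         component->update(deltaTime);
   }

private:
   std::vector<std::unique_ptr<Component>> components;
};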

In the image above, the “Knight” Entity has an Md2 Animation Component. This component’s update() function takes care of updating the Entity’s vertex data according to the animation it’s playing. It can also expose, by means of the Inspector, a UI that the user can access to set the currently playing animation and its properties.

Native vs Scriptable Components

At this time, as the new Component concept is introduced, the only component that the Engine provides out of the box is the Md2 Animation Component. The idea from this point on, however, is for all extra functionality that affects Entities to be implemented as components.

I envision that we will end up supporting two types of components in the future: native components (developed in C, C++ and Objective-C against the Vortex API) and scripted components.

If you have been paying attention to the Editor screenshots I’ve been uploading, you will have noticed that since day one we have had a “scripting” tab sitting alongside the 3D View tab. This is because the idea of exposing the Engine through a scripting interface is something that I’ve been interested in for a very long time. With the Component API taking shape, I think that allowing components to be developed through a scripting language is going to be a feasible option that will open the door for implementing tonnes of new features.

There is still a lot of work ahead for both the Editor and the Engine, but I’m sure that the next version of Vortex, “V3”, is going to be the most significant update in the Engine’s 6+ years of history. Stay tuned for more!

MD2 Entities

This week I’ve been working on revamping the old OBJ and MD2 importers to support the new Entity system. I originally wrote these loaders back in 2010 and, although the parser/loader code worked without any changes, I decided to revamp the external interfaces to make it easier to select the correct loader depending on the file type.
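
With the revamp, the external interface boils down to picking a loader based on the file extension. A rough sketch of that dispatch is shown below; the loader class names and signatures are assumptions used for illustration.

#include <string>

// Illustrative dispatch only; loader names and signatures are hypothetical.
vtx::Entity* loadModel(const std::string& path, vtx::Scene& scene)
{
   const std::string ext = path.substr(path.find_last_of('.') + 1);

   if (ext == "md2")
      return vtx::Md2Loader::load(path, scene);   // keyframe-animated Quake II models
   if (ext == "obj")
      return vtx::ObjLoader::load(path, scene);   // static Wavefront geometry

   return nullptr;   // unsupported format
}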

Importing an MD2 model with texture into the Vortex Editor.

The image above shows how easy it is now to bring a Quake II MD2 model into the editor. We start by importing the files into the asset library. Then we drag and drop the MD2 file from the library into the 3D world and, voilà, a new Entity is created.

Using the properties panel, we can adjust the Entity’s transformation and set the texture file for the material. This process is definitely much simpler than it used to be, back when we had to load the model through code and then feed its vertex arrays into the GL.

The plan for next week is to wrap up MD2 support by implementing better control over the format’s animations, and then we will be off to new and better things!

Stay tuned for more!

Bring in the Assets!

This week I worked on a way to bring external assets into the Editor.

Importing an external image into the editor and texturing a cube.

The way this works is by means of a new import tool in the UI. Import allows the user to select any set of files to copy into the project’s resource directory. Once in the project, resources are shown in the new Library View from where they can be accessed.

What can you do with the assets you bring into the project?

For 3D assets, dragging them onto the 3D view makes the Editor instantiate a new entity. This new entity will appear in the entity list view and can be manipulated like any other entity.

Image assets, in turn, can be assigned to materials to change the appearance of entities in the scene. The figure above shows how we can bring a JPG file into the Editor and use it to texture a box created from the Entity menu.

Designing the Asset Library UX

I tried different designs for the library view, starting with a simple list that would show the asset’s name, type and preview, but I ultimately decided to simplify the interaction and settled for a tree view.

It was hard to throw away the initial tabular design I implemented, but I believe going with the tree view was ultimately the right choice.

Once I had the library view, the first thing I wanted to try was whether dragging assets from it onto the 3D world and other widgets would be feasible. I thought this feature would be troublesome to land; however, I was very pleased to see the implementation come together rather quickly.

Drag and drop is a great interaction because it prevents taking the user out of context when she wants to assign a value to a widget or other UI element.

Putting the picture together

If we take a step back, we can now start to see how these features are coming together to help bring the WYSIWYG experience to the Editor.

If you recall from our first post in the Vortex Editor series (click here to quickly jump there), one of the main objectives of this project was to allow simplifying the way we assemble scenes for the Vortex Engine.

Now that we have a Resource Library and a way to visually edit Entity properties, look at how much simpler it is to create a scene from scratch:

  1. First, we start the Editor and click File -> Import Assets.
  2. Then, we select the assets to bring into the project (for instance, a few obj files).
  3. After this, we simply drag the imported obj file from the Library onto the 3D view. This will create a new Entity.
  4. Finally, we adjust the new Entity’s properties (position, rotation, scale) to our liking and drag image assets onto them to set their textures. That’s it!

The whole process is very visual, allowing the user to see the scene at all times and (hopefully) unleashing her artistic creativity by enabling quick improvisation and trial and error. It feels much more fun than tweaking a series of float values intermixed with C++ code and recompiling every time.

Having reached this initial milestone, the plan for next week will be mostly polish work for these features, improving project resource management and further cleaning up the internal architecture.

Stay tuned for more!

Entity transform via the Properties Panel

This week I finished implementing full editing of Entity transformations by means of the properties panel.

Five boxes showcasing different transforms set via the properties panel.

In the above image, five box instances were dynamically created and then moved, rotated and scaled using nothing but the transform panel. This is a big milestone: from now on, complete scenes can be assembled for rendering with the engine without having to tinker with transformation values in code.

Now, a common question I’ve been receiving during this series is whether the editor is capable of rendering something other than boxes. Because the editor is powered by the full Vortex Engine, it can load and render a number of file formats out of the box.

With the ability to assemble scenes dynamically, we can now start bringing in external geometry in the form of entities and build our setup from them. This coming week I’m going to be working on better Entity support for external meshes and start bringing them into the editor.

Stay tuned for more!