The slow road to persistence: encoding

This week I took on the large task of implementing a serialization scheme that allows saving the user’s scene onto their disk.

A scene containing a textured forktruck model.

The problem at hand can be divided into two major tasks:

  1. Serialize the scene by encoding its contents into some format.
  2. Deserialize the scene by decoding the format to recreate the original data.

I’ve chosen JSON as the serialization format for representing the scene’s static data. That is, entities and components, along with all their properties and relationships, are to be encoded into a JSON document that can then be used to create a perfect clone of the scene.

Why JSON? JSON is a well-known hierarchical format that provides two major benefits: first, it’s easy for humans to read and hence debug. Second, its nested structure maps almost 1:1 onto Entity and Component hierarchies.

It’s also worth noting that JSON tooling is excellent, making it easy to quickly whip up a Python or Javascript utility that works on the file.

The downside is that reading and writing JSON is not as efficient as a custom binary format. However, scene loading and saving is a rare enough operation, most likely performed outside of the main game loop, that JSON is a feasible option at this time.

OK, so without further ado, let’s take a look at what a scene might look like once serialized. The following listing presents the JSON encoding of the scene depicted above. It was generated with the new vtx::JsonEncoder class.

{
   "entities" : [
      {
         "children" : [],
         "components" : [
            {
               "type" : 0
            },
            {
               "type" : 1
            }
         ],
         "name" : "2D Grid",
         "transform" : {
            "position" : {
               "x" : 0,
               "y" : -0.5,
               "z" : 0
            },
            "rotationEuler" : {
               "x" : 0,
               "y" : 0,
               "z" : 0
            },
            "scale" : {
               "x" : 1,
               "y" : 1,
               "z" : 1
            }
         }
      },
      {
         "children" : [],
         "components" : [
            {
               "type" : 0
            },
            {
               "type" : 1
            },
            {
               "type" : 100
            }
         ],
         "name" : "forktruck.md2",
         "transform" : {
            "position" : {
               "x" : 0,
               "y" : -0.5,
               "z" : 0
            },
            "rotationEuler" : {
               "x" : 4.7123889923095703,
               "y" : 0,
               "z" : 0
            },
            "scale" : {
               "x" : 0.0099999997764825821,
               "y" : 0.0099999997764825821,
               "z" : 0.0099999997764825821
            }
         }
      }
   ]
}

The document begins with a list of entities. Each entity contains a name, a transform, a list of children and a list of components. Notice how the transform is a composite object on its own, containing position, rotation and scale objects.

Let’s take a look at the encoded forktruck entity. We see its name has been stored, as well as its complete transform. In the future, when we decode this object, we will be able to create the entity automatically and place it exactly where it needs to be.

Now, you may have noticed that components look a little thin. Component serialization is still a work in progress and, at this time, I am only storing their types. As I continue to work on this feature, components will have all their properties stored as well.
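
To give a rough idea of the shape of the encoding logic, here’s a minimal, self-contained sketch of the recursive “encode an entity, then its children” pattern that such an encoder follows. The types and helper functions below are simplified stand-ins chosen for illustration only; the real vtx::JsonEncoder is structured differently and works against the actual engine classes.

// Sketch only: simplified stand-ins for the engine types. Component
// serialization mirrors the current state of the feature (type only).
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };
struct Component { int type; };

struct Entity {
    std::string name;
    Vec3 position, rotationEuler, scale;
    std::vector<Component> components;
    std::vector<Entity> children;
};

// Encode a vec3 as a JSON object.
std::string encodeVec3(const Vec3& v) {
    std::ostringstream os;
    os << "{\"x\": " << v.x << ", \"y\": " << v.y << ", \"z\": " << v.z << "}";
    return os.str();
}

// Recursively encode an entity, its components and its children.
std::string encodeEntity(const Entity& e) {
    std::ostringstream os;
    os << "{\"name\": \"" << e.name << "\", \"transform\": {"
       << "\"position\": " << encodeVec3(e.position)
       << ", \"rotationEuler\": " << encodeVec3(e.rotationEuler)
       << ", \"scale\": " << encodeVec3(e.scale) << "}, \"components\": [";
    for (size_t i = 0; i < e.components.size(); ++i)
        os << (i ? ", " : "") << "{\"type\": " << e.components[i].type << "}";
    os << "], \"children\": [";
    for (size_t i = 0; i < e.children.size(); ++i)
        os << (i ? ", " : "") << encodeEntity(e.children[i]);
    os << "]}";
    return os.str();
}

int main() {
    Entity grid{"2D Grid", {0, -0.5f, 0}, {0, 0, 0}, {1, 1, 1}, {{0}, {1}}, {}};
    std::cout << "{\"entities\": [" << encodeEntity(grid) << "]}\n";
}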

On a more personal note, I don’t recall having worked with serialization/deserialization at this scale before. It’s definitely a challenge, but one that’s already proving to be a large yet satisfying task. I am excited at the prospect of being able to save and transfer full scenes, perhaps even over the wire to a different type of device.

The plan for next week is to continue to work on developing the serialization logic and getting started with the deserialization. Once this task is complete, we will be ready to move on to the next major task: the complete overhaul of the rendering system!

Stay tuned for more!

New Scale Tween Component for the Vortex Engine

This week was pretty packed, but I found some time to write the final addition to the native components for the vertical slice of the Editor. This time around, the new Scale Tween Component joins the Waypoint Tween and Rotation components to provide an efficient way to animate the scale of an entity.

With this new component, Vortex now supports out-of-the-box animation for all basic properties of an entity: position, rotation and scale.

The current lineup of built-in components for Vortex V3 is now as follows:

  • vtx::WaypointTweenComponent: continuously move an entity between 2 or more predefined positions in the 3D world.
  • vtx::RotationComponent: continuously rotate an entity on its Ox, Oy or Oz axis.
  • vtx::ScaleTweenComponent: continuously animate the scale of an entity to expand and contract.

The Scale Tween Component was implemented as a native plugin that taps directly into the Core C++ API of the Vortex Engine. This allows leveraging the speed of native, optimized C++ code to animate an entity’s scale with very low overhead.

In the future, once we have implemented scripting support, a script that needs to alter an entity’s scale will be able to just add a Scale Tween Component to the affected entity, configure the animation parameters (such as speed, scaling dimensions and animation amplitude) and rely on the Entity-Component System to perform the animation automagically.
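
To make the idea more concrete, here’s a small sketch of what such a component’s per-frame update might look like. The member names (speed, amplitude, per-axis flags) are hypothetical parameters chosen for illustration; this is not the actual vtx::ScaleTweenComponent interface.

// Sketch of a scale tween: oscillate the owning entity's scale over time.
// Parameter names are illustrative, not the engine's real API.
#include <cmath>

struct Vec3 { float x, y, z; };

class ScaleTweenSketch {
public:
    // Called once per frame by the entity-component system.
    void update(float dtSeconds) {
        elapsed_ += dtSeconds;
        // Expand and contract around the base scale.
        float factor = 1.0f + amplitude_ * std::sin(elapsed_ * speed_);
        currentScale_ = { baseScale_.x * (affectX_ ? factor : 1.0f),
                          baseScale_.y * (affectY_ ? factor : 1.0f),
                          baseScale_.z * (affectZ_ ? factor : 1.0f) };
        // In the engine, the result would be written back into the
        // owning entity's transform here.
    }

    Vec3 currentScale() const { return currentScale_; }

private:
    float elapsed_   = 0.0f;
    float speed_     = 2.0f;   // oscillation speed, radians per second
    float amplitude_ = 0.25f;  // +/- 25% of the base scale
    bool  affectX_ = true, affectY_ = true, affectZ_ = true;
    Vec3  baseScale_{1.0f, 1.0f, 1.0f};
    Vec3  currentScale_{1.0f, 1.0f, 1.0f};
};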

Of course, Scale Tween components can also be added statically to entities by means of the Vortex Editor UI.

Time permitting, next week we’ll finally be able to move on to entity hierarchy and component persistence. I want to roll this feature out in two phases: first implement the load operation and then, once it has proven to be solid, implement saving from the Editor UI.

There’s this and much more coming soon! Stay tuned for more!

Revisiting the Entity View

Work continued on the Vortex Editor this past week. One UI element that has always been somewhat of a functional placeholder, and that I’ve been meaning to revisit, is the Entity View list.

Sitting on the left side of the UI, the Entity View shows all entities in a scene and provides a way to select them. As entities gained the ability to be parented to one another, however, it became apparent this view had to evolve to better express entity relationships.

The new Entity View in the Vortex Editor (shown on the left) displays parenting relationships between entities.

A natural solution to this problem is to represent the entities in a hierarchical view (such as a tree). Not only does this convey which entities are parented to which, it also allows for better organization of the project, as empty entities can now be used to group together other entities related to a particular game/simulation mechanic.

The above screenshot shows the new view in action. This is a 3D model loaded from the Wavefront OBJ format, which allows for describing hierarchies of objects. The Vortex Engine’s OBJ loader recognizes these object groupings in the OBJ format and represents each as a separate entity. All of these are then parented to a single entity representing the 3D model in its entirety.
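
As a rough illustration of the grouping idea (and only that; it is not the actual Vortex loader code), the sketch below walks an OBJ file and creates one child entity per “o”/“g” record, all hanging off a single root entity that stands for the whole model.

// Sketch: one entity per OBJ object/group record, parented under a root.
// The Entity type and parsing here are simplified stand-ins.
#include <fstream>
#include <memory>
#include <sstream>
#include <string>
#include <vector>

struct Entity {
    std::string name;
    std::vector<std::unique_ptr<Entity>> children;
};

std::unique_ptr<Entity> loadObjHierarchy(const std::string& path) {
    auto root = std::make_unique<Entity>();
    root->name = path;                      // root represents the whole model

    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "o" || tag == "g") {     // object / group record
            auto child = std::make_unique<Entity>();
            ls >> child->name;              // simplified: first token only
            root->children.push_back(std::move(child));
        }
        // "v", "vt", "vn" and "f" records would be routed to the entity
        // currently being built in a real loader.
    }
    return root;
}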

Overall I’m very satisfied with the results. There is still more work to be done, but I think that this is a valuable UX improvement that will go a long way.

Once this work is complete, the plan is to work on the ability to persist the entities and their configuration (including all attached components). I have some ideas on how to bring this about, but there are still some interesting problems that will have to be resolved.

Stay tuned for more!

Excellent Intro to Qt Quick

I came across this video that provides a great introduction to the Qt Quick controls in Qt 5.

It’s very interesting to see how a fully fledged, cross platform app that consumes a Web API can be developed in just over 15 minutes almost without a single line of code.

After seeing this video, I’ve been looking a little more into how C++ can be integrated into Qt Quick apps and, unfortunately, it doesn’t seem to leverage the signal-slot mechanism common to QWidgets applications.

This is a problem, since it means that reusing a large codebase might be a little more involved than seamlessly transitioning from writing QWidgets to slowly rolling out side-by-side Qt Quick panels.

Nonetheless, it’s very impressive and it’s definitely worth taking a look at if you need to develop a quick desktop UI in 10~15 minutes.

Implementing a Waypoint System in the Vortex Engine

This week, we’re back to developing new native components for the Vortex Engine. The objective this time was to build a “Waypoint Tween” component that moves an entity between a series of points.

The new Waypoint Tween Component is used to move a 3D model between four points.

There are two main aspects to bringing the system to life: the component implementation in the Vortex Engine and the UI implementation in the Vortex Editor.

At the engine level, the system is implemented as a C++ component that quickly performs the math necessary to interpolate the entity’s position between points based on time and speed.
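
As an illustration of that math (and only that; the real vtx::WaypointTweenComponent is more complete), here’s a minimal sketch that advances an entity along a looping list of waypoints at constant speed using plain linear interpolation:

// Sketch: constant-speed linear interpolation between looping waypoints.
// Assumes at least two distinct points; leftover distance when crossing a
// waypoint is not rescaled, which is acceptable for an illustration.
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3  operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3  operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3  operator*(float s)       const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

class WaypointTweenSketch {
public:
    explicit WaypointTweenSketch(std::vector<Vec3> pts) : points_(std::move(pts)) {}

    // Returns the interpolated position after advancing by dtSeconds.
    Vec3 update(float dtSeconds) {
        Vec3 from = points_[current_];
        Vec3 to   = points_[(current_ + 1) % points_.size()];

        // Advance along the current segment at constant speed.
        t_ += (speed_ * dtSeconds) / (to - from).length();
        if (t_ >= 1.0f) {                 // reached the waypoint: next segment
            t_ -= 1.0f;
            current_ = (current_ + 1) % points_.size();
            from = points_[current_];
            to   = points_[(current_ + 1) % points_.size()];
        }
        return from + (to - from) * t_;   // linear interpolation
    }

private:
    std::vector<Vec3> points_;
    size_t current_ = 0;      // index of the waypoint we are moving away from
    float  t_       = 0.0f;   // 0..1 progress along the current segment
    float  speed_   = 1.0f;   // world units per second
};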

At the editor level, due to the flexibility of this system, exposing its properties actually required a significant amount of UI work. In this first iteration, points can be specified directly in the component properties of the inspector panel. Later on, the plan is to allow the user to specify the points as actual entities in the world and then reference them.

Animation and Movement

Now, in the animated GIF above, you can see that the 3D model is not only moving between the specified points, but it also appears to be running between them.

There are two factors at play here to implement this effect: the MD2 Animation and the Waypoint Tween.

The MD2 Animation and Waypoint Tween Components.

When enabled, the Animate Orientation property of the Waypoint Tween component orients the 3D model so that it’s looking towards the direction of the point it’s moving to.

This property is optional, as there are some cases where it could be undesirable. For instance, imagine implementing a conveyor belt that moves boxes on top of it: it would look weird if the boxes magically rotated on their Oy axis. For a character, on the other hand, it makes complete sense that the model be oriented towards the point it’s moving to.
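
As a small aside, orienting a model towards its target can be derived from the direction of travel alone. Here is a sketch of the idea, assuming the model faces down +Z at rest and only rotates around Oy; it mirrors the intent behind the Animate Orientation flag rather than the engine’s exact code.

// Returns the Oy rotation (in radians) that points a +Z-facing model
// towards 'to' while standing at 'from'.
#include <cmath>

struct Vec3 { float x, y, z; };

float yawTowards(const Vec3& from, const Vec3& to) {
    float dx = to.x - from.x;
    float dz = to.z - from.z;
    return std::atan2(dx, dz);   // 0 when moving along +Z, pi/2 along +X
}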

Regarding the run animation, if you have been following our series on the Vortex Editor, you will remember that when instantiated by the engine, MD2 Models automatically include an MD2 Animation Component that handles everything related to animating the entity.

More details can be found in the post where we detail how MD2 support is implemented, but the idea is that we set the animation to the looping “run” sequence.

When we put it all together, we get an MD2 model that is playing a run animation as it patrols between different waypoints in the 3D world.

Waypoint System in Practice

So how can the waypoint system be used in practice? I envision two uses for the waypoint system.

The first one is for environment building. Under this scenario, the component system is used to animate objects in the background of the scene. Case in point, the conveyor belt system described above.

The second use, which might be a little more involved, would be to offload work from scripts. The efficient C++ implementation of the waypoint system would allow a component developed in a scripting language to have an entity move between different points without having to do the math calculations itself.

The dynamic nature of the component would allow such a script to add and remove points, as well as interrupt the system at any time to perform other tasks. An example would be a monster that uses the waypoint system to patrol an area of the scene and then, when a player is detected nearby, the system is interrupted and a different one takes over, perhaps to attack the player.

In closing

I had a lot of fun implementing this system, as it brings a lot of options to the table in terms of visually building animated worlds for the Vortex Engine.

The plan for next week is to continue working on the Editor. There is some technical debt on the UI I want to address in order to improve the experience and there are also a couple of extra components I want to implement before moving on to other tasks.

As usual, stay tuned for more!

OpenGL from a 10,000ft view

This month marks 10 years since I started learning and using OpenGL, and what a ride it has been! I started off with basic OpenGL 1.1 back at university under the guidance of my mentor and ex-Googler Gabriel Gambetta, then moved on to the programmable pipeline by teaching myself how to write shaders, and then rode the wave of the mobile revolution with OpenGL ES on the iPhone.

OpenGL Logo. Copyright (C) Khronos Group.

As part of this process, I’ve also had the privilege of teaching OpenGL to others at one of the most important private universities back home. This pushed me to learn ever more about the API and improve my skills.

Rather than writing a retrospective post to commemorate the date, I thought about doing something different. In this post I’m going to explain how OpenGL works from a 10,000ft view. I will lay out the main concepts of how vertex and triangle data gets converted into pixels on the screen and, in the process, explain how a video card works. Let’s get started!

What is OpenGL

At the most basic level, OpenGL can be seen as a C API that allows a program to talk to the video driver and request operations or commands to be performed on the system’s video card.

Titan X GPU by NVIDIA. Image courtesy of TechPowerup.com

So what is a video card? A video card (or GPU) is a special-purpose parallel computer, excellent at executing a list of instructions on multiple data at the same time. A video card has its own processors, its own memory and it’s good at performing one particular set of operations (namely, linear algebra) very very fast.

What OpenGL gives us is access to this device through a client/server metaphor where our program is the client that “uploads” commands and data to the video driver. The video driver, which is the server in this metaphor, buffers this data and, when the time is right, it will send it to the video card to execute it.

Our program’s “instance” inside the video driver is known as the OpenGL Context. The Context is our program’s counterpart in the video card and it holds all the data we’ve uploaded (including compiled shader programs) as well as a large set of global variables that control the graphics pipeline configuration. These variables comprise the OpenGL State and they’re the reason OpenGL is usually seen as a State Machine.
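
To make the state machine idea concrete, here are a few illustrative calls (wrapped in a helper function purely for the sake of the example) that do nothing but flip global switches stored in the Context; whatever is drawn afterwards is affected by them until they are changed again.

#include <GL/glew.h>   // or your platform's OpenGL headers

// Flip a few global switches stored in the OpenGL Context. Each setting
// stays in effect for every subsequent draw call until changed again.
void configurePipelineState() {
    glEnable(GL_DEPTH_TEST);                             // depth-test fragments
    glEnable(GL_BLEND);                                  // enable blending...
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // ...with standard alpha blending
    glClearColor(0.2f, 0.2f, 0.2f, 0.0f);                // color used when clearing
}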

The Graphics Pipeline (Simplified)

Remember how I mentioned that the video card excels at performing a limited set of operations very very fast? Well, what are these operations?

The way the video card works is that data is fed into it through OpenGL and then it goes through a series of fixed steps that operate on it to generate, ultimately, a series of pixels. It is the job of the video card to determine which pixels should be painted and using which color. And that’s really all the video card does at the end of the day: paint pixels with colors.

The following image, taken from Wikipedia, shows a simplified view of the data processing pipeline that OpenGL defines.

A simplified view of the OpenGL pipeline. Source: Wikipedia

In this image, imagine the data coming from the user program. This diagram shows what is happening to this data inside the video card.

Here:

  1. Per-Vertex Operations: are operations that are applied to the vertex data supplied. This data can be coming in hot from main system memory or be already stored in video card memory. Here is where vertices are transformed from the format they were originally specified in into something we can draw on the screen. The most common scenario here is to take a piece of geometry from Object Space (the coordinate system the artist used), place it in front of a virtual “camera”, apply a perspective projection and adjust its coordinates. In the old days, here is where all transformation and lighting would take place. Nowadays, when a shader program is active, this stage is implemented by the user’s Vertex Shader.
  2. Primitive Assembly: here’s where OpenGL actually assembles whatever vertex data we supplied into its basic primitives (Triangle, Triangle Strip, Triangle Fan, Lines or Points, among others). This is important for the next step.
  3. Rasterization: is the process of taking a primitive and generating discrete pixels. It amounts to, given a shape, determining what pixels said shape covers. If the conditions are right, texture memory can be sampled here to speed up the texturing process.
  4. Per-Fragment Operations: are operations performed on would-be pixels. If there is a shader program active, this is implemented by the user in the fragment shader. Texture mapping operations take place here, as well as (usually) shading and any other operations that the user can control. After this stage, a number of operations take place based on the State Machine. These operations include depth testing, alpha testing and blend operations.
  5. Framebuffer: finally, this is the image we are rendering our pixels to. It is normally the screen, but we can also define a texture or a Render Target object that we could then sample to implement more complex effects. Shadow Mapping is a great example of this.

Sample OpenGL Program

Having taken a (very quick) look at OpenGL, let’s see what a simple OpenGL program might look like.

We are going to draw a colored triangle on the screen using a very simple script that shows the basic interaction between our program and the video card through OpenGL.

A colored triangle drawn by a simple program exercising the OpenGL API.

I’m using Python because I find that its super simple syntax helps put the focus on OpenGL. OpenGL is a C API, however, and in production code we tend to use C or C++ when working with it. There are other bindings available for Java and C# as well but, mind you, these just marshal the calls into C and invoke the API directly.

This script can be divided into roughly three parts: initializing the window and OpenGL context, declaring the data to feed to the video card, and a simple event loop. Don’t worry, I’ll break it down in the next section.

#!/opt/local/bin/python2.6
import pygame
from OpenGL.GL import *

def main():
	# Boilerplate code to get a window with a valid
	# OpenGL Context
	w, h = 600, 600
	pygame.init()
	pygame.display.set_caption("Simple OpenGL Example")
	scr = pygame.display.set_mode((w,h), pygame.OPENGL|pygame.DOUBLEBUF)
	
	glClearColor(0.2, 0.2, 0.2, 0.0)

	# Data that we are going to feed to the video card:
	vertices = [ \
		-1.0, -1.0, 0.0, \
		1.0, -1.0, 0.0,  \
		0.0, 1.0, 0.0 ]

	colors = [ \
		1.0, 0.0, 0.0, 1.0, \
		0.0, 1.0, 0.0, 1.0, \
		0.0, 0.0, 1.0, 1.0 ]
			

	# Here's the game loop, all our program does is
	# draw to a buffer, then show that buffer to the
	# user and read her input.
	done = False
	while not done:
		# Clear the framebuffer
		glClear(GL_COLOR_BUFFER_BIT)
		
		# Supply the video driver a pointer to our
		# data to be drawn:

		glVertexPointer(3, GL_FLOAT, 0, vertices)
		glEnableClientState(GL_VERTEX_ARRAY)

		glColorPointer(4, GL_FLOAT, 0, colors)
		glEnableClientState(GL_COLOR_ARRAY)

		# Now that all data has been set, we tell
		# OpenGL to draw it, and which primitive
		# our data describes. This will be used
		# at the primitive assembly stage.
		glDrawArrays(GL_TRIANGLES, 0, 3)

		# Clean up
		glDisableClientState(GL_COLOR_ARRAY)
		glDisableClientState(GL_VERTEX_ARRAY)
		
		# Show the framebuffer
		pygame.display.flip()

		# Process input:
		for evt in pygame.event.get():
			if evt.type == pygame.QUIT:
				done = True

if __name__ == "__main__":
	main()

If you’re familiar with OpenGL, you’ll notice I’m using mostly OpenGL 1.1 here. I find it’s a simple way to show the basic idea of how data is fed into the video card. Production-grade OpenGL will no doubt prefer to buffer data on the GPU and leverage shaders and other advanced rendering techniques to efficiently render a scene composed of thousands of triangles.

Also note that the data is held in Python list objects and, therefore, the PyOpenGL binding is doing a lot of work behind the scenes here to convert it into the float arrays we need to supply to the video card.

In production code we would never do this; however, doing anything more efficient would require fiddling with pointer syntax that would undoubtedly make the code harder to read.
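
For contrast, here is a rough C++ sketch of the buffered path hinted at above (OpenGL 2.0-era style): the vertex data is copied into video card memory once and then described via a vertex attribute. It is not a complete program; context creation, shader compilation and error checking are omitted, and the program handle and "position" attribute name are assumptions made for the sake of the example.

#include <GL/glew.h>   // or your platform's OpenGL 2.0+ headers

// Upload the triangle once into a buffer that lives in video card memory.
GLuint uploadTriangle() {
    static const GLfloat vertices[] = {
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    };
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    return vbo;
}

// Draw the buffered triangle with a (previously compiled) shader program.
void drawTriangle(GLuint program, GLuint vbo) {
    glUseProgram(program);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Describe the buffered data to the "position" attribute declared in
    // the vertex shader.
    GLint positionLoc = glGetAttribLocation(program, "position");
    glEnableVertexAttribArray(static_cast<GLuint>(positionLoc));
    glVertexAttribPointer(static_cast<GLuint>(positionLoc),
                          3, GL_FLOAT, GL_FALSE, 0, nullptr);

    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(static_cast<GLuint>(positionLoc));
}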

Putting it all together

Now, if you’re unfamiliar with OpenGL code, let’s see how our program is handled by the Graphics Pipeline.

		# Supply the video driver a pointer to our
		# data to be drawn:

		glVertexPointer(3, GL_FLOAT, 0, vertices)
		glEnableClientState(GL_VERTEX_ARRAY)

		glColorPointer(4, GL_FLOAT, 0, colors)
		glEnableClientState(GL_COLOR_ARRAY)

We start off by providing an array of vertices and colors to OpenGL, as well as a description of how this data is to be interpreted. Our calls to glVertexPointer and glColorPointer (in real life you would use glVertexAttribPointer instead) tell OpenGL how our numbers are to be interpreted. In the case of the vertex array, we say that each vertex is composed of 3 floats.

glEnableClientState is a function that tells OpenGL that it’s safe to read from the supplied array at the time of drawing.

		# Now that all data has been set, we tell
		# OpenGL to draw it, and which primitive
		# our data describes. This will be used
		# at the primitive assembly stage.
		glDrawArrays(GL_TRIANGLES, 0, 3)

glDrawArrays is the actual function that tells OpenGL to draw, and what to draw. In this case, we are telling it to draw triangles out of the data we’ve supplied.

After this call, vertex data will go through the per-vertex operations stage and then be handed off to the primitive assembly, which will effectively interpret the vertices as forming part of one (or more) triangles.

Next, the rasterization stage will determine which pixels on the framebuffer would be covered by our triangle and emit these pixels, which will then go to the per-fragment operations stage. The rasterization stage is also responsible for interpolating vertex data over the triangle; this is why we get a color gradient spanning the area of the triangle: it’s simply the interpolation of the colors at the three vertices.

This is all happening inside the video card, in parallel with our event loop; that’s why there is no source code to show for this part.

		# Show the framebuffer
		pygame.display.flip()

Finally, after everything is said and done, the video card writes the resulting pixels on the framebuffer, and we then make it visible to the user by flipping the buffers.

In Closing and Future Thoughts

We’ve barely scratched the surface of what OpenGL is and can do. OpenGL is a big API that has been around for 20+ years and has been adding lots of new features as video card and video games companies continue to push for ever more realistic graphics.

Now, while 20+ years of backwards compatibility allow running old code almost unmodified on modern systems, design decisions accumulated over time tend to obscure the optimal path to performance, as well as impose restrictions on applications that would benefit from more direct control of the video card.

Vulkan logo. ™ Khronos Group.

These points, made by the Khronos Group itself, have led to the design and development of a new graphics API standard called Vulkan. Vulkan is a break from the past that provides a slimmed-down API more suitable for modern-day hardware, in particular for multi-threaded and mobile applications.

OpenGL, however, is not going away any time soon, and the plan for the Khronos Group, at least for the time being, appears to be to offer both APIs side by side and let developers choose the one more suitable to the problem at hand.

Additionally, with Apple focusing on Metal and Microsoft on DX12, OpenGL (in particular OpenGL ES 2.0) remains the only truly cross-platform API that can target almost every relevant device on the planet, be it an iPhone, an Android phone, a Windows PC, GNU/Linux or Mac.

Finally, the large body of knowledge built up over OpenGL’s 20+ years, coupled with its relative “simplicity” compared to a lower-level API such as Vulkan, may make it a more interesting candidate for students learning their first hardware-accelerated 3D API.

As time marches on, OpenGL remains a strong contender, capable of pushing anything from AAA games (like Doom) to modern-day mobile 3D graphics and everything in-between. It is an API that has stood the test of time, and will continue to do so for many years to come.

Building a Vortex Engine scene using Vortex Editor

This week I’ve been working on several different UX enhancements for the Editor to improve the scene building experience in general.

Building a scene for the Vortex Engine using the Vortex Editor.

One thing to notice in the above image is the new horizontal grid on the floor. Each cell is exactly 1×1 world unit in size, which helps tremendously in keeping a sense of scale and distance between objects when building a scene.

What I like about this image is how it shows the different components of the Editor coming together to make the Vortex Engine easier to use than ever. The more complete the Editor is, the quicker we can move on to start implementing the new rendering backend for Vortex, codename “V3”.

Stay tuned for more weekly updates!

Off-the-Shelf Native Components

At the end of last week’s article, we signed off mentioning the possibility of providing two component APIs for the Entity Component System of the Vortex Editor. Although I’m still very much in line with this idea, the more I lean towards a scripting language, the more I am concerned about performance.

I am worried that scripted components might not be able to meet the budget of updating in less than 16.6 ms once we have a few of them in any given scene. 16.6 ms (1000 ms / 60 frames) is the time slice we have to update all components and render the scene on the screen in order to achieve 60 FPS.

A native component, on the other hand, could benefit from all the advantages of going through an optimizing compiler, and requires no parameter marshaling when interfacing with Vortex’s C++ API.

In light of this, what I’ve been considering is providing a rich set of native components that are part of the Vortex API and that a script can attach to Entities. This would provide the advantage of being able to dynamically add and remove behavior from Entities, while still hitting native performance during the update-render loop.

Of course, I’m still very interested in providing a means to create components through scripts, but perhaps these can be used mostly for exploring ideas, and if performance becomes a problem, then the best practice would be to move to a native implementation of the component.

This week, as a dry-run of the Component API, I’ve implemented a simple Rotation Component that will make any Entity spin on its local axes. A custom UI widget in the Vortex Editor allows configuring the speed and rotation axis of the component in realtime, while visualizing the animation in the 3D view.
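
For the curious, the gist of such a component fits in a few lines. The sketch below uses made-up member names (per-axis flags and a degrees-per-second speed) purely for illustration; it is not the actual component’s code.

// Sketch of a rotation component: spin the owning entity on its local axes.
struct Vec3 { float x, y, z; };

class RotationSketch {
public:
    // Called once per frame by the entity-component system.
    void update(float dtSeconds) {
        float step = speedDegPerSec_ * dtSeconds;
        if (spinX_) rotationEuler_.x += step;
        if (spinY_) rotationEuler_.y += step;
        if (spinZ_) rotationEuler_.z += step;
        // The accumulated Euler angles would be written back into the
        // owning entity's transform by the engine.
    }

private:
    Vec3  rotationEuler_{0.0f, 0.0f, 0.0f};
    float speedDegPerSec_ = 90.0f;            // configurable from the editor UI
    bool  spinX_ = false, spinY_ = true, spinZ_ = false;
};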

I’m pretty happy with the results, and I believe this is the way to move forward with the Entity Component System of the Engine.

Next week I will continue working on the Editor and Engine, trying to maximize what you can do out of the box. Stay tuned for more!

Entity Components in the Vortex Engine

Last week, we got to a point where we can drag and drop an MD2 file into the 3D view and it automatically instantiates an Entity representing the 3D model. Now, because MD2 models support keyframe animation, suppose we wanted to somehow provide a way for this model to be correctly animated…

An example component driving a keyframe animation for a 3D model.

The Problem

One way to go about adding animations would be to implement the MD2 keyframe logic inside the Entity class, so that I need only send an update message to the Entity and it will update the vertex data it holds.

Because I can also instantiate Entities from static OBJ files, it doesn’t make much sense for this logic to be part of the Entity. So what could we do then?

Well, again, we could create a sub-class of Entity with the logic that we need for the animation. The problem with this approach is that sooner rather than later, we will have a huge hierarchy of classes where we will need to constantly refactor and add intermediate super-classes in order to have Entities that share some behaviors (but not others) in a way that enables code reuse.

The Entity-Component-System Model

Wouldn’t it be nice if, instead of extending and inheriting code, I could somehow cherry-pick the properties and functionality that I want for my Entities from a set of prebuilt components? Not only could we build a set of off-the-shelf tested and reusable components that we could mix and match, but also, we would be favoring composition over inheritance, effectively eliminating the need for sub-classing altogether!

This is the main idea behind the Entity-Component-System architecture of modern Game Engines, and it has been the main focus of the work in the Engine this week.

The brand new Component architecture allows adding properties (and even behavior!) dynamically to Entities in a flexible way that prevents coupling and encourages code reuse. The idea behind it is simple enough: components that are added to entities get regular update() calls (once per frame) that can then be used to affect the Entity they are attached to or the world around them.
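
In pseudo-real C++, the core of the idea looks roughly like the sketch below. Class and method names are illustrative; the actual Vortex API differs in the details.

// Sketch of the core idea: components attached to an entity receive a
// per-frame update() call and can mutate the entity they belong to.
#include <memory>
#include <vector>

class Entity;   // forward declaration

class Component {
public:
    virtual ~Component() = default;
    virtual void update(Entity& owner, float dtSeconds) = 0;
};

class Entity {
public:
    void addComponent(std::unique_ptr<Component> c) {
        components_.push_back(std::move(c));
    }

    // Called once per frame by the engine's update loop.
    void update(float dtSeconds) {
        for (auto& c : components_)
            c->update(*this, dtSeconds);
    }

private:
    std::vector<std::unique_ptr<Component>> components_;
};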

In the image above, the “Knight” Entity has an Md2 Animation Component. This component has its update() function set so that it takes care of updating the Entity’s vertex data according to the animation it’s playing. It can also expose, by means of the Inspector, a UI that the user can access to set the currently playing animation and its properties.

Native vs Scriptable Components

At this time, as the new Component concept is introduced, the only component that the Engine provides out of the box is the Md2 Animation Component. The idea from this point on, however, is for all extra functionality that affects Entities to be implemented as components.

I envision that we will end up supporting two types of components in the future: native components (developed in C, C++ and Objective-C against the Vortex API) and scripted components.

If you have been paying attention to the Editor screenshots I’ve been uploading, you will have noticed that since day one we have had a “scripting” tab sitting alongside the 3D View tab. This is because the idea of exposing the Engine through a scripting interface is something that I’ve been interested in for a very long time. With the Component API taking shape, I think that allowing components to be developed through a scripting language is going to be a feasible option that will open the door for implementing tonnes of new features.

There is still a lot of work ahead for both the Editor and the Engine, but I’m sure that the next version of Vortex, “V3”, is going to be the most significant update in the Engine’s 6+ year history. Stay tuned for more!

MD2 Entities

This week I’ve been working on revamping the old OBJ and MD2 importers to support the new Entity system. I originally wrote these loaders back in 2010 and, although the parser/loader code worked without any changes, I decided to revamp the external interfaces to make it easier to select the correct loader depending on the file type.

Importing an MD2 model with a texture into the Vortex Editor.

The image above shows how easy it is now to bring a Quake II MD2 model into the editor. We start by importing the files into the asset library, then drag and drop the MD2 file from the library into the 3D world and -voila- a new Entity is created.

Using the properties panel, we can adjust the Entity’s transformation and set the texture file for the material. This process is definitely much simpler than what it used to be, back when we had to load the model through code and then feed its vertex arrays into the GL.

The plan for next week is to wrap up MD2 support by implementing better control over the format’s animations, and then we will be off to new and better things!

Stay tuned for more!