Let’s Build Pong in Vortex Engine!

In our last post, we presented the new event system in Vortex and how it allows scripts to receive and react to input events. This week we want to put this system to the test by showing how an interactive playground could be implemented. Enter Vortex Pong!

To see the playground in action, please check out the video above. In the rest of this post, I’m going to break down how the project was built and how the script running the simulation works.

Visual Layout of the World

We first started by modeling the environment. Visuals are not a requirement for a pong-like game, as long as you have two paddles and a ball, but we also wanted to show how easy it is to have realtime dynamic lights in the scene.

Laying out the world for our pong-like simulation. Vortex Editor allows visually assembling our scene from simpler objects and placing the lights.


Vortex Engine enables visually creating the layout of the world and objects inside it. For representing the ball, we used a simple sphere primitive, whereas the rest of the geometry was assembled from scaled cubes.

The Vortex Editor allows easily resizing the meshes into the appropriate shapes and creating materials that interact with the lights. We also used the editor to place the lights in the scene.


No matter how good we might be able to make the world look, it won’t do a lot without adding logic to it.

In Vortex, we use Lua for scripting. The Engine provides a runtime that loads and executes the scripts in a project, and it exposes an API for running a simulation loop and handling events – this is really all that’s needed for this playground!

The Problem Space

We don’t want to go overboard with our implementation, so we will keep everything simple by having 3 functions and a tiny self-contained vector math library.

The ball will bounce around, updated in our simulation function. We want to support having two players. For this, we will store the state of the keyboard as events arrive, and update the paddle positions also from our simulation loop.

Lua is a pretty great language and in under 200 lines of code we are able to build everything we need for this.
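The scripts below lean on a handful of vec2 helpers (vec2.new, vec2.random_non_zero, normalize). The post doesn’t show this little library, but a minimal stand-in could look like this (a sketch only, not the engine’s actual code):

```lua
-- Minimal vec2 helpers, sketched to match the calls used below.
vec2 = {}
vec2.__index = vec2

function vec2.new( x, y )
    return setmetatable( { x = x, y = y }, vec2 )
end

function vec2:length()
    return math.sqrt( self.x * self.x + self.y * self.y )
end

function vec2:normalize()
    local len = self:length()
    self.x, self.y = self.x / len, self.y / len
    return self
end

-- Random direction guaranteed not to be the zero vector,
-- so normalize() never divides by zero.
function vec2.random_non_zero()
    local v
    repeat
        v = vec2.new( math.random() * 2 - 1, math.random() * 2 - 1 )
    until v:length() > 1e-6
    return v
end
```

With these few functions in place, expressions like vec2.random_non_zero():normalize() in the scripts below just work.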


Initialization happens once at the beginning of the script execution and it is responsible for setting up the main camera, registering callbacks for handling on_frame and on_event messages and finding and caching entities of interest.

function main()
    -- register ourselves for the engine callbacks:
    table.insert( vtx.callbacks.on_frame, on_frame )
    table.insert( vtx.callbacks.on_event, on_event )

    -- set the main camera (this is also needed for events to be sent to us)
    local cam_entity = vtx.find_first_entity_by_name( "main_cam" )
    local cam = cam_entity:first_component_of_type( TYPE_CAMERA )
    vtx.rendering.set_main_camera( cam )

    -- find entities of interest and cache their transforms:
    local ball = vtx.find_first_entity_by_name( "ball" )
    ball_xform = ball:get_transform()
    local bsx, bsy, bsz = ball_xform:get_scale()
    ball_scale = bsx

    local paddle_left = vtx.find_first_entity_by_name( "paddle_left" )
    paddle_left_xform = paddle_left:get_transform()

    local paddle_right = vtx.find_first_entity_by_name( "paddle_right" )
    paddle_right_xform = paddle_right:get_transform()

    -- get and store the position and bounds of the world
    local world_container = vtx.find_first_entity_by_name( "world_container" )
    world_container_xform = world_container:get_transform()
    local wx, wy, wz, ww = world_container_xform:get_position()
    world_center = vec2.new( wx, wy )
    local wsx, wsy, wsz = world_container_xform:get_scale()
    world_size = vec2.new( wsx, wsy )

    -- start the ball (we could randomize this)
    ball_dir = vec2.random_non_zero():normalize()

    -- find animated lights:
    local ball_light = vtx.find_first_entity_by_name( "ball_light" )
    ball_light_xform = ball_light:get_transform()

    local paddle_left_light = vtx.find_first_entity_by_name( "paddle_left_light0" )
    paddle_left_light_xform = paddle_left_light:get_transform()

    local paddle_right_light = vtx.find_first_entity_by_name( "paddle_right_light0" )
    paddle_right_light_xform = paddle_right_light:get_transform()
end


Notice how the first two lines add our script functions to the engine’s on_frame and on_event callback lists. This is the key to building an interactive simulation.

Handling Events

Handling events is pretty simple. We will hold a global table that stores which keys are currently pressed down on the keyboard. We do not update any paddle positions here. These updates will be handled in the simulation function.

pressed_keys = {} -- global table of currently held keys

function on_event( evt )
    if evt.type == EVT_TYPE_KEYDOWN then
        pressed_keys[ evt.key ] = true
    elseif evt.type == EVT_TYPE_KEYUP then
        pressed_keys[ evt.key ] = nil
    end
end

Simulation Loop

The simulation is the most complicated function in this example. It has several responsibilities, including updating everything that is moving, detecting collisions against the world and the paddles, and resetting the game in case of a player scoring.

In the context of this example, we did not dig deep into possible optimizations beyond avoiding unnecessary table allocations. Some CPU time could be saved by premultiplying radii and sizes by 0.5 as part of the initialization function.

In a real-life project we would also want to break this function up into smaller ones with more clearly-defined responsibilities.
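To make the premultiplication idea concrete, here is what it could look like (stand-in values for illustration; in the real script these would come from the transforms cached during initialization):

```lua
-- Stand-in values for illustration; the real script reads
-- these from the cached transforms in main().
ball_scale = 1.0
world_size = { x = 16.0, y = 9.0 }

-- Computed once at init instead of multiplying by 0.5 every frame:
ball_half  = ball_scale * 0.5
world_half = { x = world_size.x * 0.5, y = world_size.y * 0.5 }

-- on_frame could then test simply:
--   next_x + ball_half >= world_center.x + world_half.x
```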

-- assumed tuning values (the speeds are not shown in the original post):
ball_speed = 6.0
paddle_speed = 8.0

function on_frame( delta_t )
    -- update ball:
    local x, y, z, w = ball_xform:get_position()

    local next_x = x + ball_dir.x * delta_t * ball_speed
    local next_y = y + ball_dir.y * delta_t * ball_speed

    local bounce_right = next_x + ball_scale * 0.5 >= world_center.x + world_size.x * 0.5
    local bounce_left =  next_x - ball_scale * 0.5 <= world_center.x - world_size.x * 0.5

    if not bounce_left and not bounce_right then
        x = next_x
    else
        -- the ball has hit the left or right wall, check for a scoring event:
        if bounce_right then
            local px, py, pz, pw = paddle_right_xform:get_position()
            local sx, sy, sz = paddle_right_xform:get_scale()
            if y <= py + sy * 0.5 and y >= py - sy * 0.5 then
                ball_dir.x = -ball_dir.x -- saved!
            else
                -- score!
                print( "Score Player 0!" )
                ball_dir = vec2.random_non_zero():normalize()
                x = world_center.x
                y = world_center.y
            end
        elseif bounce_left then
            local px, py, pz, pw = paddle_left_xform:get_position()
            local sx, sy, sz = paddle_left_xform:get_scale()
            if y <= py + sy * 0.5 and y >= py - sy * 0.5 then
                ball_dir.x = -ball_dir.x -- saved!
            else
                -- score!
                print( "Score Player 1!" )
                ball_dir = vec2.random_non_zero():normalize()
                x = world_center.x
                y = world_center.y
            end
        end
    end

    if next_y + ball_scale * 0.5 >= world_center.y + world_size.y * 0.5 or next_y - ball_scale * 0.5 <= world_center.y - world_size.y * 0.5 then
        ball_dir.y = -ball_dir.y
    else
        y = next_y
    end

    ball_xform:set_position( x, y, z, w )
    ball_light_xform:set_position( x, y, z, w )

    -- update paddles:
    local plx, ply, plz, plw = paddle_left_xform:get_position()
    if pressed_keys[ KEY_W ] ~= nil then
        ply = ply + delta_t * paddle_speed
    end
    if pressed_keys[ KEY_S ] ~= nil then
        ply = ply - delta_t * paddle_speed
    end
    paddle_left_xform:set_position( plx, ply, plz, plw )
    paddle_left_light_xform:set_position( plx, ply, plz, plw )

    local prx, pry, prz, prw = paddle_right_xform:get_position()
    if pressed_keys[ KEY_UP ] ~= nil then
        pry = pry + delta_t * paddle_speed
    end
    if pressed_keys[ KEY_DOWN ] ~= nil then
        pry = pry - delta_t * paddle_speed
    end
    paddle_right_xform:set_position( prx, pry, prz, prw )
    paddle_right_light_xform:set_position( prx, pry, prz, prw )
end


One of the first things you will notice is that all the logic is simulated in 2D, despite this being a 3D world. There is no need to run the simulation in 3D, as we are only interested in what's happening on the plane where the game is being played.

The ball simulation works by calculating the position the ball should occupy next, based on its move vector and the time elapsed since the last update. Instead of updating the ball position immediately, however, we check whether the new position would make the ball collide with a wall, the floor, or the ceiling.

Colliding with the floor and ceiling is trivial: we just mirror the y component of the move vector.

Horizontal collisions are more complex, as they require checking whether or not the ball hit a paddle.

Assuming no player has scored, we send the updated positions down to the engine via the transform objects we previously cached.

Finally, we update the paddle positions based on which keys are currently pressed. This makes for a much smoother animation than updating the positions directly from the keypress events.


This is really all there is to it. Once everything is set up, we can run the playground directly from the Editor, or build it into a Vortex Archive and run it on any Vortex Runtime (any Runtime with a keyboard attached, that is).

If you haven't seen the video above, I recommend you take a look, as I mention a few concepts that I glossed over in this post.

I hope you guys found this post interesting and, as usual, stay tuned for more!

Developing the Scripting API

With the vertical slice complete and the 3D renderer in a good spot, this week I decided to shift focus to the scripting API of the engine.

Scripting is a very desirable feature for any engine. It allows adding (and modifying) logic on the fly, without having to recompile or relink any parts of the program. It makes iteration times super fast, enabling creativity.

In Vortex, we chose Lua for the scripting backend. We added initial support about a year ago. At that time, we decided to build a custom binding from scratch and we succeeded, but the work done was mostly proof of concept. This weekend, the objective was to expand this foundation so scripts could perform more useful tasks, such as inspecting and manipulating the world.

In order to achieve this, a number of changes were needed, both at the scripting level and at the editor level. In particular, we needed:

  • A way to wrap and expose entity transforms to Lua scripts.
  • A way to mutate these transforms.
  • A way for scripts to add themselves to the runloop and run logic every frame.
  • A way for the engine and editor to run (and “step”) scripts.
  • A way to hot-reload scripts and reboot the VM when things go south.

The video above shows all these concepts coming together to allow creating a simple simulation of a ball bouncing inside a 3D box. The ball has a green point light inside that moves around with it. This is mostly to show that the simulation is still running on the engine’s modern deferred renderer ;)

The Scripting Model

Key to the scripting model is the ability to talk to the engine from a loaded script and find objects in the scene. This allows the user to visually create worlds in the Vortex Editor and then find the important entities from scripts.

Scripts can also create their own entities of course, but for this example, we just wanted to pre-build the world visually.

For the bouncy ball example in the video above, we started off by creating the containing box, the ball object, and the lights in the scene. We used the Editor tools to create all materials and define the look of the entities and lighting.

But once we have our visual scene, how do we script it?

The entry point for scripts running in Vortex is the vtx namespace. Scripts hosted by Vortex automatically get access to a global table with entry points to the engine.

Functions in the vtx namespace are serviced directly from C++. This is a powerful abstraction that allows exposing virtually all engine functionality to a script.

This is exactly what we did. Through the vtx namespace, the bouncy_ball.lua script easily finds the ball, the walls, and the light. Once we have these objects we can get their transforms and register a function that will update them every frame.

Running Scripts

Once our script is ready, we can bring it into the scene directly from within the Editor.

Currently, loading any script will execute it. This runs all code at the file scope inside it. It’s important that scripts that want to respond to engine events register their callbacks at this point.

In order to run every frame, we are interested in the on_frame event inside the vtx.callbacks table. This table behaves essentially like a list. Once every frame, the engine will walk this list and call all functions registered there.
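Conceptually, the per-frame dispatch is just a walk over that list. In Lua terms, the engine effectively does something like this (a sketch of the idea; the real loop lives in C++):

```lua
-- A sketch of the callback dispatch the engine performs each frame.
vtx = { callbacks = { on_frame = {}, on_event = {} } }

-- A script registers itself, exactly as in the listings above:
frame_count = 0
table.insert( vtx.callbacks.on_frame, function( delta_t )
    frame_count = frame_count + 1
end )

-- What the engine effectively does once per frame:
function dispatch_frame( delta_t )
    for _, fn in ipairs( vtx.callbacks.on_frame ) do
        fn( delta_t )
    end
end

dispatch_frame( 0.016 )
dispatch_frame( 0.016 )
```

Because the table behaves like a list, multiple scripts can register callbacks and they are simply invoked in registration order.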

Pausing and Testing

Since the runloop is controlled directly by the engine, this gives the Editor enormous control over script execution. In particular, we can use the Editor to pause and even step scripts!

Coupled with the Editor’s REPL Lua console, this gives the user a lot of control. Through the Editor UI, the user can stop the scripts and inspect and change any Lua objects in realtime. No need to recompile the Editor or reload the scene or scripts.

Show me the Code!

Ok, we covered a lot of ground above. To help the concepts settle in, here’s the complete bouncy_ball.lua script used to build the simulation shown above. The main points of interest are the main and on_frame functions.

-- A ball bouncing inside a 3D box in Vortex Engine
-- This script looks for the following entities in the scene:
-- 1. box (the container)
-- 2. ball (the bouncing ball)
-- 3. ball_light (a light that is placed inside the ball)

move_speed = 5.0 -- ball move speed

function main()
    -- Find entities that we need and cache important
    -- transforms.
    ball = vtx.find_first_entity_by_name( "ball" )
    ball_xform = ball:get_transform()
    ball_xform:set_position( 0, 3, 0, 1 )
    ball_radius = ball_xform:get_scale() * 0.5 -- uniform scale: only the first return value is used

    ball_light = vtx.find_first_entity_by_name( "ball_light" )
    ball_light_xform = ball_light:get_transform()
    ball_light_xform:set_position( 0, 3, 0, 1 )

    box = vtx.find_first_entity_by_name( "box" )
    box_xform = box:get_transform()
    bx, by, bz = box_xform:get_position()
    bsx, bsy, bsz = box_xform:get_scale()
    box_scale = { bsx, bsy, bsz }

    move_dir = { 1.0, 0.75, 0.5 } -- could be randomized with a seed

    -- Add ourselves to the engine's scripting runloop:
    table.insert( vtx.callbacks.on_frame, on_frame )
end

function on_frame( deltat )
    -- Called every frame by the engine

    x, y, z, w = ball_xform:get_position()

    -- Arrays in Lua are 1-based:
    x = x + move_dir[1] * deltat * move_speed
    y = y + move_dir[2] * deltat * move_speed
    z = z + move_dir[3] * deltat * move_speed

    if x + ball_radius >= bx + box_scale[1] * 0.5 or x - ball_radius <= bx - box_scale[1] * 0.5 then
        move_dir[1] = -move_dir[1]
    end

    if y + ball_radius >= by + box_scale[2] * 0.5 or y - ball_radius <= by - box_scale[2] * 0.5 then
        move_dir[2] = -move_dir[2]
    end

    if z + ball_radius >= bz + box_scale[3] * 0.5 or z - ball_radius <= bz - box_scale[3] * 0.5 then
        move_dir[3] = -move_dir[3]
    end

    ball_xform:set_position( x, y, z, w )
    ball_light_xform:set_position( x, y, z, w )
end



The main function is responsible for finding all important entities in the scene and initializing the simulation. As mentioned before, it is run as soon as the script is loaded into the engine. Notice how the main function adds the on_frame function to the runloop.

The on_frame function runs every frame. It receives the time elapsed since the last frame, which can be used to implement a framerate-independent simulation.

It is worth noting that nothing in the on_frame function allocates memory. In particular, position components are passed into and pulled out of the engine on the Lua stack, with no heap allocations. This is important because Lua has a garbage-collected runtime and we want to avoid collection pauses during the simulation.
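The difference between the two calling styles can be sketched as follows. get_position_table is a hypothetical alternative, shown only to contrast with the allocation-free multiple-return style described above:

```lua
-- Allocation-free: four numbers travel on the Lua stack.
function get_position_unpacked()
    return 1.0, 2.0, 3.0, 1.0
end

-- Allocating: every call creates a fresh table that the GC must
-- eventually collect (hypothetical alternative, for contrast).
function get_position_table()
    return { 1.0, 2.0, 3.0, 1.0 }
end

x, y, z, w = get_position_unpacked() -- no heap traffic
p = get_position_table()             -- one new table per call
```

Called once, the difference is negligible; called for every entity on every frame, the table-returning style steadily feeds the garbage collector.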


It's been a lot of fun exploring hosting a scripting language inside the engine and manually building the binding between it and C++.

I think the ability to define the visual appearance of the scene in the Editor and then let scripts find entities at runtime was the right decision at this time. It's a simple model that solves the problem elegantly and can be performant if you cache the things you need to access often.

I am going to continue working on the binding and see how far it can go. It's a good break from working on the renderer all the time ;)

I'm definitely interested in your thoughts! Please share below and, as usual, stay tuned for more!

Vortex Engine V3 and Editor Vertical Slice Complete!

Just a few days ahead of the 2018 SIGGRAPH conference, the last few pieces came into place to allow a complete experience from Editor to Runtime.

Vortex Editor and Runtime. Using a .vtx archive, 3D worlds built in the Editor can now be packaged with their resources and run on the Engine. In this image, the sponza scene as set up in Editor, is running on the Vortex Runtime for iOS.


When we set out to revamp the engine and build an editor for Vortex, we wanted to provide users of the engine with a way to intuitively assemble and tweak 3D worlds and then run them on the engine without the fuss of having to rebuild the app every time.

The Vortex Editor moved us in that direction, allowing the user to visually build their world using point and click. The final missing piece was the ability to build a self-contained package that wrapped all the project resources and could be distributed.

Enter the Vortex Archive (.vtx) files.

Vortex Archive files wrap all the resources necessary for the engine to load your created 3D world and run it. With this in place, we now have a full end-to-end experience where a world can be authored completely in the Vortex Editor, then packaged into a Vortex Archive, and ultimately run on any of the supported platforms.

Vortex Archive Format (.vtx)

In order to package the scene manifest and all referenced resources, I ended up designing a custom binary file format for the Archive. I used the extension .vtx, although I originally wanted to call these .var files (after Vortex ARchive). The var name, however, is associated with variable and temporary data on UNIX systems, so I didn’t want to clash with that convention.

The format in its initial version is pretty simple to read and write. The following table shows how the resources are stored inside the archive.

Size (bytes)   Contents

Archive Header
8              Archive Version (currently 1.0)
8              Number of Entries

Resource Entry (repeated once per entry)
8              Resource Path String Length
8              Sub-Resource Identifier String Length
8              Data Lump Size
varies         resource path
varies         sub-resource path
varies         raw file data
The contents section contains all the resources one after the other. The total number of stored resources is given by the archive header, under the “Number of Entries” field.

I could’ve added a “magic” number at the beginning, but all in all, this is a very simple format that binds everything together.
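To make the layout concrete, here is how the fixed-size header fields could be read in Lua 5.3+ using string.unpack. This is a hypothetical reader: the post does not specify the on-disk encoding, so little-endian unsigned 64-bit fields are assumed here.

```lua
-- Hypothetical reader for the archive header described above.
-- Assumes little-endian unsigned 64-bit integers. Requires Lua 5.3+.
function read_archive_header( bytes )
    local version, num_entries, next_pos = string.unpack( "<I8I8", bytes )
    return { version = version, num_entries = num_entries }, next_pos
end

-- Round-trip check, building a fake 16-byte header with string.pack:
blob = string.pack( "<I8I8", 1, 42 )
header, payload_start = read_archive_header( blob )
```

The second return value is the offset of the first byte after the header, which is where the per-entry records would begin.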

A Note on Compression

As part of the format definition process, I studied compressing each individual resource using zip. Ultimately, I discarded the idea.

Although zip compression would be beneficial for text resources (such as the scene manifest), at this time the vast majority of resources stored are (already) compressed image files. These are not expected to deflate significantly, so I couldn’t justify the increase in complexity at this time.

I might revisit this in the future as we expand scripting support and provide the ability to write custom shaders.

Vortex Runtime

With a complete workflow from Editor to Engine, it’s now possible to build a 3D world entirely on desktop and deploy it on any of the supported platforms: Windows, Mac, Linux-based OSes and mobile devices.

Now, as easy as it is to add the engine to a project, there might be cases where we don’t want to write an app just to be able to run a 3D world. Cases where all we need is a thin layer that loads and runs our archive. For these cases, we’ve decided to add a third project into the mix: the Vortex Runtime.

The Runtime is a small frontend app to the engine that can load a Vortex Archive and play it. It’s a minimal, platform-specific app that wraps the underlying intricacies and provides a consistent interface on which Vortex Archives can be run.

Runtimes can be developed for any platform where the engine is available, enabling authored 3D worlds to be deployed virtually anywhere. An advanced user will probably still want to use C++ in order to stay in control, but for building simple playgrounds in Lua, the Runtime might be all that you need.

I think this is a powerful concept – and a fun one to explore – seeing how much of the engine we can expose through the scripting API before having to dip into native code.


It’s been a long ride since we initially set off to build a vertical of the Editor and revamp the Engine, but it has been worth it.

With these latest additions, we’ve now got a complete tool that we can grow horizontally. It is a starting point we can use to study the implementation of new rendering techniques, as well as further explore tech related to simulation, physics, compilers, scripting APIs and native platforms.

This moment is the culmination of a lot of hard work (during my free time), but it’s not the end. It is the beginning. Stay tuned for more to come!

Deferred Realtime Point Lights

It has been a while since my last update, but I’m excited to share significant progress on the new renderer. As of a couple of weeks ago, the new renderer supports realtime deferred point lights!

Point Lights in Vortex Engine 3.0's Deferred Renderer. Sponza scene Copyright (C) Crytek.


Point lights in a deferred renderer are a bit more complicated to implement than directional lights. For directional lights, we can usually get away with drawing a fullscreen quad to calculate the light contribution to the scene. With point lights, we need to render a light volume for each light, calculating the light contribution for the intersecting meshes.

The following image is from one of the earlier tests I was conducting while implementing the lights. Here, I decided to render the light volumes as wireframe meshes for debugging purposes.

Deferred Point Lights with their Volumes rendered as a wireframe.


If you look closely, you can see how each light is contained to a sphere and only contributes to the portions of the scene it is intersecting. This is the great advantage of a deferred renderer when compared to a traditional forward renderer.

In a forward renderer, we would have had to draw the entire scene for each light. Only at the very end of the pipeline would we realize that a point light contributed nothing to a fragment – at which point we would already have performed all the work in the fragment shader. In comparison, a deferred renderer only computes the subsection of the screen affected by each light volume. This allows for very large numbers of realtime lights in a scene, with the total cost of many lights on screen amounting to roughly that of one big light.

Determining Light Intersections

One problem that arises when rendering point light volumes is determining the intersection with the scene geometry. There are different ways of solving this problem. I decided to base my approach on this classic presentation by NVIDIA.

Light Volume Stencil Testing. We use the stencil buffer to determine which fragments are at the intersection of the light volume with a mesh.


The idea is to use the stencil buffer to cleverly test the light volumes against the z-buffer. In order for this to work, I had to do a pre-pass, rendering the back faces of the light volumes. During this pass, we update the stencil value only on z-fail. Z-fail means that we can’t see the back of our light volume because another mesh is there – exactly the intersection we’re looking for!

Once the stencil buffer pass is complete, we do a second pass of the light volumes, this time with the stencil test set to match the reference value (and z-testing disabled). The fragments where the test passes are lit by the light.

The image above shows the idea. In it, you can see how the light volume determines the fragments that the point light is affecting.


Here are some more screenshots of the technique.

In the following image, only the lion head had a bump map. For the rest of the meshes, we’re just using the geometric normal. Even as I was building this system, I was in awe at the incredible interaction of normal mapping with the deferred point lights. Take a look at the lion head (zoom in for more details), the results are astounding.

Vortex Engine 3.0 - Deferred Point Lights interacting with normal mapped and non-normal mapped surfaces.


Here’s our old friend, the test cube, being lit by 3 RGB point lights.

Vortex Engine 3.0 - Our trusty old friend, the test cube, being lit by 3 realtime deferred point lights.


I’m still playing with the overall light intensity scale (i.e. what does “full intensity” mean?). Lights are pretty dim in the Sponza scene, so I might bring them up across the board to be more like in the cube image.


Deferred rendering is definitely an interesting technique that brings a lot to the table. In recent years it has been superseded by more modern techniques like Forward+; however, the results are undeniable – especially when combined with elaborate shading techniques such as normal mapping.

The next steps will be to implement spot light support and start implementing post processing techniques.

Stay tuned for more!

Multiple Directional Realtime Lights

This week work went into supporting multiple directional realtime lights in the new Vortex V3 renderer.

Multiple Realtime Lights rendered by Vortex V3.


In the image above, we have a scene composed of several meshes, each with its own material, being affected by three directional lights. The lights have different directions and colors and the final image is a composition of all the color contributions coming from each light.

In order to make the most out of this functionality in the engine, I revamped the Light Component Inspector. It’s now possible to set the direction and color through the UI and see the results affect the scene immediately. You can see the new UI in the screenshot above.

Now, since lights are entities, I considered reusing the entity’s rotation as a way to rotate a predefined vector and thus define the light. In the end, however, I decided against it. The main reason is that I think it is clearer to explicitly set the direction vector in the UI than to have the user work out angles in their head to produce an obscure internal vector. This way, you specify the vector directly.

I’m pretty happy with the results. Internally, each light is computed individually and then all contributions are additive-blended onto the framebuffer. This means the cost of rendering n objects affected by m lights is n + m draw calls. This is a big advantage over the forward rendering equivalent, which would require at least n * m draw calls.
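The draw call savings are easy to quantify. For a hypothetical scene with 100 meshes and 8 lights:

```lua
-- Illustrative scene size: n meshes, m lights.
n, m = 100, 8
deferred_calls = n + m -- geometry pass draws + one volume per light
forward_calls  = n * m -- every mesh redrawn for every light
```

That is 108 draw calls for the deferred path versus 800 for the naive forward equivalent, and the gap only widens as lights are added.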

Notably missing from the image above is color bleed. Photorealism is addictive: the closer you approximate real life, the easier it becomes to tell an image is synthetic when something is missing. This will be a topic for another time, however.

Next week I want to make some additions to the material system to make it more powerful, as well as start implementing omnidirectional lights.

Stay tuned for more!

Normal Mapping 2.0

This week we switched gears back to rendering! A lot of work went into building Normal Mapping in the new Renderer. The following image shows the dramatic results:

Normal mapping in the new Deferred Renderer.


Here, I switch back and forth between the regular Geometry Pass shader and a Normal Mapping-aware shader. Notice how Normal mapping dramatically changes the appearance of the bricks, making them feel less part of a flat surface and more like a real, coarse surface.

I initially discussed Normal Mapping back in 2014, so I definitely recommend you check out that post for more details on how the technique works. The biggest difference in building Normal Mapping in Vortex V3 compared to Vortex 2.0 was implementing it on top of the new Deferred Renderer.

There is more work to be done in terms of Normal Mapping, such as adding specular mapping, but I’m happy with the results so far. Next week we continue working on graphics! Stay tuned for more!

Putting it all together

This week has been a big one in terms of wrangling together several big pillars of the engine to provide wider functionality. The image below shows how we can now dynamically run an external Lua script that modifies the 3D world on the fly:

Vortex loading and running an external script that changes the texture of an entity's material.


In the image above, I’ve created two boxes. Both of these have different materials and each material references a different texture.

In the clip, I “mistakenly” drag a character texture from the Asset Library and assign it as the second box’s texture. Oh no! How can we fix this? It’s easy: just run an external script that assigns the first box’s texture to the second!

I’ve pasted the code of the script below:

function get_entity_material( entity )
	-- get an entity's material
	local rendercomp = entity:first_component_of_type( RENDER_COMPONENT_TYPE )
	local material = rendercomp:get_material()
	return material
end

ent0 = vtx.find_first_entity_by_name("box0")
mat0 = get_entity_material( ent0 )
tex0 = mat0:get_texture( "diffuseTex" )

ent1 = vtx.find_first_entity_by_name("box1")
mat1 = get_entity_material( ent1 )

mat1:set_texture( "diffuseTex", tex0 )


As you can see, the script is pretty straightforward. It finds the boxes, drills all the way down to their materials and then assigns the texture of the first box to the second. The changes are immediately seen in the 3D world.

It’s worth noting that all function calls into the vtx namespace and derived objects are actually jumping into C++. The script is therefore dynamically manipulating engine objects, which is why we see its effects in the scene view.

The function names are still work in progress, and admittedly, I need to write more scripts to see if these feel comfortable or if they’re too long and therefore hard to remember. My idea is to make the scripting interface as simple to use as possible, so please if you have any suggestions I would love to hear your feedback! Feel free to leave a comment below!

Next week I will continue working on adding more functionality to the scripting API, as well as adding more features to the renderer! Stay tuned for more!

Component Introspection

This week, work on the scripting interface continued. As the image below shows, I can now access an entity’s components and even drill down to its Material through the Lua console.

Introspecting a Render Component to access its material via the Lua interface.

The image above shows an example of the scripting interface for entities and components. Here, we are creating a new Entity from the builtin Box primitive and then finding its first component of type “1”. Type 1 is an alias for the Render Component of the Entity, which is responsible for binding a Mesh and a Material together.

Once we have the Render Component, we use it to access its Material property and print its memory address.
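Put together as a script, the console session described above might look something like the sketch below. The entity-creation call and the exact return values are assumptions pieced together from the examples elsewhere in these posts, not a verbatim transcript:

```lua
-- Sketch of the introspection session described above (API names
-- assumed from the examples in this post series).

-- Create a new Entity from the builtin Box primitive.
local ent = vtx_instantiate( "Box" )

-- Component type 1 is the Render Component, which binds
-- a Mesh and a Material together.
local rendercomp = ent:first_component_of_type( 1 )

-- Drill down to the Material and print it (its memory address).
local material = rendercomp:get_material()
print( material )
```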

As the image shows, although Lua allows for very flexible duck typing, I am performing type checking behind the scenes to make scripting mistakes obvious to the user. Allow me to elaborate:

For the interface, I’ve decided that all components will be hidden behind the vtx.Component “class”. Now, this class will be responsible for exposing the interface to all native component methods, such as get_material(), set_mesh(), get_transform() and so forth.

The problem is, how do we prevent trying to access the material property of a Component that doesn’t have one, such as the Md2AnimationComponent? In my mind, there are two ways. I’m going to call them the “JavaScript” way and the “Python” way.

In the JavaScript way, we don’t really care. We allow calling any method on any component and silently fail when a mismatch is detected. We may return nil or “undefined”, but at no point are we raising an error.

In the Python way, we perform a sanity check before invoking the function on the component, and halt the operation when an error is found. You can see that in the example above. Here we’re purposefully attempting to get the material of a Base Component (component type 0), which doesn’t have one. In this case, the Engine detects the inconsistency and raises an error.

I feel the Python way is the way to go: it prevents the subtle, hard-to-debug errors that arise from allowing any method to be called on any component and happily carrying on in the hope of reaching some sort of result.
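The “Python way” check can be mimicked in plain Lua. The sketch below is purely illustrative (it is not the engine’s actual implementation): every method on the shared Component class first verifies the component’s native type and raises a Lua error on a mismatch:

```lua
-- Illustrative sketch: a shared Component "class" whose methods
-- validate the underlying native component type before dispatching.
local RENDER_COMPONENT_TYPE = 1

local Component = {}
Component.__index = Component

function Component.new( native_type )
	return setmetatable( { native_type = native_type }, Component )
end

-- Helper: halt with a clear error if the component type doesn't match.
local function check_type( comp, expected, method_name )
	if comp.native_type ~= expected then
		error( string.format(
			"%s() is not supported by components of type %d",
			method_name, comp.native_type ) )
	end
end

function Component:get_material()
	check_type( self, RENDER_COMPONENT_TYPE, "get_material" )
	-- ...would jump into C++ here to fetch the actual material...
end

-- A Base Component (type 0) has no material; calling get_material()
-- on it raises an error instead of silently returning nil.
local base = Component.new( 0 )
local ok, err = pcall( function() return base:get_material() end )
print( ok, err )
```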

A third alternative would have been to actually expose a separate “class” for every component type. This would certainly work, but I’m concerned about a potential “class explosion” as we continue to add more and more components to the Engine. Furthermore, I feel type-checked duck typing is a good fit for a language like Lua, well in tune with its philosophy.

Now that we can drill all the way down to an Entity’s material, it’s time to expand the interface to allow setting the shader and the material properties, allowing the script developer to control how entities are rendered by the Engine. Stay tuned for more!

Vortex Editor turns 1 year!

I hadn’t realized, but the Vortex Editor turned one year old a couple of months ago. I set off to work on this project in my free time with a clear set of objectives and it’s hard to believe that one year has already passed since the initial kick-off.

Of course, the Editor is closely tied to the Engine, which has seen its fair share of improvements through this year. From building an entirely new deferred renderer to completely replacing the node-based scene graph system with an Entity Component System model that is flexible and extensible, enhancements have been wide and deep.

This post is a short retrospective on the accomplishments of the Vortex Editor and Vortex Engine through this last year.

Vortex Engine Achievements in the last year:

  1. Kicked off the third iteration of the Vortex Engine, codename “V3”.
  2. Upgraded the Graphics API to Core OpenGL 3.3 on Desktop and OpenGL ES 3.0 on Mobile.
  3. Implemented Deferred Rendering from scratch using MRT. Established the base for PBR rendering.
  4. New Entity Component System model, far more flexible than the old scene graph model and with support for Native and Script Components.
  5. Overhaul of several internal engine facilities, such as the Material and Texture systems.
  6. Completely redesigned engine facilities such as Lights and Postprocessing.
  7. New Lua-powered engine scripting.
  8. Ported to Windows, fixing several cross-compiler issues along the way. The engine now builds with clang, GCC and MSVC and runs on Linux, Mac, Windows, iOS and Android.
  9. Started moving codebase to Modern C++ (C++11).

Vortex Editor Achievements in the last year:

  1. Successfully kicked off the project. Built from scratch in C++.
  2. Built a comprehensive, modular UI with a context-sensitive architecture that adjusts to what you’re doing.
  3. Bootstrapped the project using Vortex Engine 2.0, then quickly moved to V3 once things were stable.
  4. Provided basic serialization/deserialization support for saving and loading projects.
  5. Implemented a Lua REPL that allows talking to the engine directly and scripting the Editor.
  6. Added a friendly Drag-and-Drop interface for instantiating new Entities.
  7. Gave complete visual control over an Entity’s position, orientation and scale in the world, as well as the configuration of its components.
  8. Allowed dynamically adding new components to entities to change their behavior.

It has been quite a ride for the last year. I believe all these changes have successfully built and expanded upon the 5 years of work on Vortex 1.1 and 2.0. I’m excited about continuing to work on these projects to ultimately come up with a product that is fun to tinker with.

My objectives for year two of the Editor include scene and asset packaging, expanded scripting support and PBR rendering.

Stay tuned for more!

Building the Engine Scripting API

Last week when we left off, we were able to implement a Lua REPL in the Vortex Editor console. This week, I wanted to take things further by allowing Lua scripts to access the engine’s state, create new entities and modify their properties.

Scripting Interface to the Engine. A C++ cube entity is instantiated from Lua code that is evaluated on the fly in the console.

In order to get started, I added a single, simple function: vtx_instantiate(). This function is available to Lua, but its actual implementation is provided in native code, in C++. The image above shows how we can use this function to add an entity to the scene from the console.

This simple example allows us to test two important concepts: first, that we can effectively call into C++ from Lua. Second, it shows that we are able to pass in parameters between the two languages. In this case, the single argument expected is a string that specifies which primitive or asset to instantiate.
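From the console, the call is a one-liner. The string argument selecting the primitive is confirmed above; anything beyond that, such as a return value, is an assumption on my part:

```lua
-- Instantiate the builtin Box primitive from the Lua console.
-- The string argument specifies which primitive or asset to create.
vtx_instantiate( "Box" )
```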

With this in place, we can now move on to building a more intricate API that enables controlling any aspect of the scene, responding to user input and even implementing an elaborate world simulation.
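To give an idea of where this could go, here is a purely hypothetical sketch of what such a script-driven simulation might look like. None of these callback names (on_frame, on_key_down) exist in the API yet; only vtx_instantiate() does:

```lua
-- Hypothetical sketch only: callback names are NOT part of the
-- current Vortex scripting API.

local elapsed = 0

-- Imagined per-frame hook: the engine would call this every frame
-- with the time delta, letting the script run a simulation loop.
function on_frame( dt )
	elapsed = elapsed + dt
	-- update entity positions, run game logic, etc.
end

-- Imagined input hook: spawn a box whenever space is pressed.
function on_key_down( key )
	if key == "space" then
		vtx_instantiate( "Box" )
	end
end
```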

Best of all, because the Lua VM is embedded into the engine, scripts built against the Vortex API will by definition be portable and run on any platform the engine runs on. This includes, of course, mobile devices.

The idea now is to continue to expand the engine API, developing a rich, easy to use set of functions. API design should prove an interesting exercise. Stay tuned for more!