To see the playground in action, please check out the video above. In the rest of this post, I’m going to break down how the project was built and how the script running the simulation works.
We started by modeling the environment. Visuals are not a requirement for a pong-like game (two paddles and a ball are enough), but we also wanted to show how easy it is to have realtime dynamic lights in the scene.
Vortex Engine enables visually creating the layout of the world and objects inside it. For representing the ball, we used a simple sphere primitive, whereas the rest of the geometry was assembled from scaled cubes.
The Vortex Editor allows easily resizing the meshes into the appropriate shapes and creating materials that interact with the lights. We also used the editor to place the lights in the scene.
No matter how good we might be able to make the world look, it won’t do a lot without adding logic to it.
In Vortex, we use Lua for scripting. The Engine provides a runtime to load and execute scripts in the project, and it exposes an API that can be used to run a simulation loop and handle events. This is really all that's needed for this playground!
We don’t want to go overboard with our implementation, so we will keep everything simple by having 3 functions and a tiny self-contained vector math library.
The ball will bounce around, updated in our simulation function. We want to support having two players. For this, we will store the state of the keyboard as events arrive, and update the paddle positions also from our simulation loop.
Lua is a pretty great language and in under 200 lines of code we are able to build everything we need for this.
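The tiny vector library mentioned above is not listed in the post. A minimal vec2 sketch that would satisfy the calls the scripts below make (vec2.new, vec2.random_non_zero, and normalize) might look like this; the playground's actual implementation may differ:

```lua
-- Hypothetical vec2 helper, reconstructed from the calls the scripts make.
vec2 = {}
vec2.__index = vec2

function vec2.new( x, y )
    return setmetatable( { x = x, y = y }, vec2 )
end

-- A random direction guaranteed not to be the zero vector.
function vec2.random_non_zero()
    local v = vec2.new( 0, 0 )
    repeat
        v.x = math.random() * 2.0 - 1.0
        v.y = math.random() * 2.0 - 1.0
    until v.x ~= 0 or v.y ~= 0
    return v
end

function vec2:normalize()
    local len = math.sqrt( self.x * self.x + self.y * self.y )
    self.x = self.x / len
    self.y = self.y / len
    return self
end
```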
Initialization happens once at the beginning of the script's execution. It is responsible for setting up the main camera, registering the on_frame and on_event callbacks, and finding and caching entities of interest.
-- file-scope state (the speed constants are assumed to be defined elsewhere in the script):
pressed_keys = {}

function main()
    -- register ourselves for the engine callbacks:
    table.insert( vtx.callbacks.on_frame, on_frame )
    table.insert( vtx.callbacks.on_event, on_event )

    -- set the main camera (this is also needed for events to be sent to us)
    local cam_entity = vtx.find_first_entity_by_name( "main_cam" )
    local cam = cam_entity:first_component_of_type( TYPE_CAMERA )
    vtx.rendering.set_main_camera( cam )

    -- find entities of interest and cache their transforms:
    local ball = vtx.find_first_entity_by_name( "ball" )
    ball_xform = ball:get_transform()
    local bsx, bsy, bsz = ball_xform:get_scale()
    ball_scale = bsx

    local paddle_left = vtx.find_first_entity_by_name( "paddle_left" )
    paddle_left_xform = paddle_left:get_transform()

    local paddle_right = vtx.find_first_entity_by_name( "paddle_right" )
    paddle_right_xform = paddle_right:get_transform()

    -- get and store the position and bounds of the world
    local world_container = vtx.find_first_entity_by_name( "world_container" )
    world_container_xform = world_container:get_transform()
    local wx, wy, wz, ww = world_container_xform:get_position()
    world_center = vec2.new( wx, wy )
    local wsx, wsy, wsz = world_container_xform:get_scale()
    world_size = vec2.new( wsx, wsy )

    -- start the ball (we could randomize this)
    ball_dir = vec2.random_non_zero():normalize()

    -- find animated lights:
    local ball_light = vtx.find_first_entity_by_name( "ball_light" )
    ball_light_xform = ball_light:get_transform()

    local paddle_left_light = vtx.find_first_entity_by_name( "paddle_left_light0" )
    paddle_left_light_xform = paddle_left_light:get_transform()

    local paddle_right_light = vtx.find_first_entity_by_name( "paddle_right_light0" )
    paddle_right_light_xform = paddle_right_light:get_transform()
end
Notice how the first two lines add our script functions to the engine's on_frame and on_event callback lists. This is the key to building an interactive simulation.
Handling events is pretty simple. We will hold a global table that stores which keys are currently pressed down on the keyboard. We do not update any paddle positions here. These updates will be handled in the simulation function.
function on_event( evt )
    if evt.type == EVT_TYPE_KEYDOWN then
        pressed_keys[ evt.key ] = true
    elseif evt.type == EVT_TYPE_KEYUP then
        pressed_keys[ evt.key ] = nil
    end
end
The simulation is the most complicated function in this example. It has several responsibilities, including updating everything that is moving, detecting collisions against the world and the paddles, and resetting the game in case of a player scoring.
In the context of this example, we did not dig deep into possible optimizations other than avoiding unnecessary table allocations. Some CPU time could be saved by premultiplying radii and sizes by 0.5 as part of the initialization function.
In a real-life example we would also want to break this function up into smaller ones with more clearly defined responsibilities.
function on_frame( delta_t )
    -- update ball:
    local x, y, z, w = ball_xform:get_position()
    local next_x = x + ball_dir.x * delta_t * ball_speed
    local next_y = y + ball_dir.y * delta_t * ball_speed

    local bounce_right = next_x + ball_scale * 0.5 >= world_center.x + world_size.x * 0.5
    local bounce_left  = next_x - ball_scale * 0.5 <= world_center.x - world_size.x * 0.5

    if bounce_left == false and bounce_right == false then
        x = next_x
    else
        -- the ball has hit one of the horizontal walls, check for scoring event:
        if bounce_right then
            local px, py, pz, pw = paddle_right_xform:get_position()
            local sx, sy, sz = paddle_right_xform:get_scale()
            if y <= py + sy * 0.5 and y >= py - sy * 0.5 then
                ball_dir.x = -ball_dir.x -- saved!
            else
                -- score!
                print( "Score Player 0!" )
                ball_dir = vec2.random_non_zero():normalize()
                x = world_center.x
                y = world_center.y
            end
        elseif bounce_left then
            local px, py, pz, pw = paddle_left_xform:get_position()
            local sx, sy, sz = paddle_left_xform:get_scale()
            if y <= py + sy * 0.5 and y >= py - sy * 0.5 then
                ball_dir.x = -ball_dir.x -- saved!
            else
                -- score!
                print( "Score Player 1!" )
                ball_dir = vec2.random_non_zero():normalize()
                x = world_center.x
                y = world_center.y
            end
        end
    end

    if next_y + ball_scale * 0.5 >= world_center.y + world_size.y * 0.5 or
       next_y - ball_scale * 0.5 <= world_center.y - world_size.y * 0.5 then
        ball_dir.y = -ball_dir.y
    else
        y = next_y
    end

    ball_xform:set_position( x, y, z, w )
    ball_light_xform:set_position( x, y, z, w )

    -- update paddles:
    local plx, ply, plz, plw = paddle_left_xform:get_position()
    if pressed_keys[ KEY_W ] ~= nil then
        ply = ply + delta_t * paddle_speed
    end
    if pressed_keys[ KEY_S ] ~= nil then
        ply = ply - delta_t * paddle_speed
    end
    paddle_left_xform:set_position( plx, ply, plz, plw )
    paddle_left_light_xform:set_position( plx, ply, plz, plw )

    local prx, pry, prz, prw = paddle_right_xform:get_position()
    if pressed_keys[ KEY_UP ] ~= nil then
        pry = pry + delta_t * paddle_speed
    end
    if pressed_keys[ KEY_DOWN ] ~= nil then
        pry = pry - delta_t * paddle_speed
    end
    paddle_right_xform:set_position( prx, pry, prz, prw )
    paddle_right_light_xform:set_position( prx, pry, prz, prw )
end
One of the first things you will notice is that all the logic is simulated in 2D, despite this being a 3D world. There is no need to run the simulation in 3D, as we are only interested in what's happening on the plane where the game is being played.
The ball simulation works by calculating the position where the ball should be next, based on its move vector and the time elapsed since the last update. Instead of updating the ball position immediately, however, we check whether the new position would make it collide with a wall, the floor, or the ceiling.
Colliding with the floor and ceiling is trivial: we just mirror the y component of the move vector.
Horizontal collisions are more complex, as they require checking whether the ball hit a paddle or not.
Assuming no player has scored, we send the updated positions down to the engine via the transform objects we previously cached.
Finally, we update the paddle positions based on which keys are currently pressed. Updating the paddles from the simulation loop, rather than directly from the keypress events, gives the player a much smoother animation experience.
This is really all there is to it. Once everything is set up, we can run the playground from the Editor directly, or build it into a Vortex Archive and run it on any Vortex Runtime (any Runtime with a keyboard attached, that is).
If you haven't seen the video above, I recommend taking a look, as it mentions a few concepts that I glossed over in this post.
I hope you guys found this post interesting and, as usual, stay tuned for more!
Vortex never had an event system before. When the time came to design one, we determined that we wanted a flexible and extensible system that could model the different kinds of events generated on different platforms.
In order to make it extensible, we decided to go with an object-oriented design. A base Event class provides the general interface to all events, and specialized subclasses model specific events happening on the hardware, such as a keypress, a mouse move, or a finger swipe.
namespace vtx
{
    class Event
    {
        public:
            virtual ~Event() { }
            virtual EventType type() const = 0;
    };

    class KeyboardEvent : public Event
    {
        public:
            KeyboardEvent( short key );
            short key() const;

        protected:
            short _key;
    };

    class KeyPressedEvent : public KeyboardEvent
    {
        public:
            KeyPressedEvent( short key );
            virtual EventType type() const override;
    };

    // ...etc...
}
With this simple interface in place, we can have platform-specific code wrap events in instances of these classes and pass them to the engine manager (a front controller). The engine manager takes care of moving data around so that, eventually, events make it to Lua scripts.
In order to receive events, scripts must add a handler function to the vtx.callbacks.on_event list. If you remember from the previous post on scripting, we were already using the vtx.callbacks namespace as a means for scripts to register a function to be called every frame. This mechanism extends that idea.
When an event is passed to the engine, functions registered to the on_event list are called. The functions are called with a single argument: a table containing the event properties (such as which key was pressed for a keydown event).
The following example shows this interaction:
function on_event( evt )
    print( "Received event: type: " .. evt.type .. " key: " .. evt.key )
    if evt.type == 0 then
        x, y, z, w = box_xform:get_position()
        if evt.key == vtx.Event.KEY_RIGHT then
            x = x + 0.1
        elseif evt.key == vtx.Event.KEY_LEFT then
            x = x - 0.1
        elseif evt.key == vtx.Event.KEY_UP then
            y = y + 0.1
        elseif evt.key == vtx.Event.KEY_DOWN then
            y = y - 0.1
        end
        box_xform:set_position( x, y, z, w )
    end
end

-- Somewhere:
table.insert( vtx.callbacks.on_event, on_event )
The above example could be used to move an entity in the scene in response to keyboard events. Of course, in practice keyboard event handling should be dependent on time elapsed, and we probably want to handle key autorepeat too, but for this example, it clearly shows how event data can be extracted from the parameter supplied to the callback.
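On the engine side, the delivery of these event tables can be pictured with the following Lua sketch. This is purely illustrative: the real dispatch happens in C++, and the stub vtx table here stands in for the engine's:

```lua
-- Stub of the engine's callback registry, for illustration only:
vtx = { callbacks = { on_event = {} } }

-- Conceptually, the engine builds a plain table with the event's
-- properties and calls every registered handler with it:
local function dispatch_event( evt_table )
    for _, handler in ipairs( vtx.callbacks.on_event ) do
        handler( evt_table )
    end
end

-- A script registers a handler...
table.insert( vtx.callbacks.on_event, function( evt )
    print( "got event type " .. evt.type .. ", key " .. evt.key )
end )

-- ...and a keydown arrives as a self-contained table:
dispatch_event( { type = 0, key = 42 } )
```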
Events are handled a bit differently from everything else in the engine so far.
Unlike other engine objects, we do not expect scripts (or other subsystems) to hold on to the native object representing the event; the event should be handled when it occurs. This means there is no need to keep a native pointer to the event in the Lua table.
Instead, we can simplify things by creating a table and copying the important information directly into it. This way we also avoid round-tripping into the engine to fetch each event property.
Internally, at the engine level, the event object is transient. It lives on the stack and is destroyed automatically after it has been dispatched to all its handlers.
It is important to note that, when running in the Editor, scripts must define a main camera in order for events to be delivered to them.
This might sound counterintuitive, but it was a deliberate decision: if a script-driven camera is never defined, the Editor preserves its fly-over camera controls. This matters when authoring a playground, where we don't want a script to take over the camera.
In search of a perfect solution, I had been putting off adding event handling for a while. In the end, the solution we devised addresses our current needs in a simple and (hopefully) elegant fashion.
Now that we have a camera control API and event handling from scripts, we are in a good position for Vortex playgrounds to take over control and start exposing more elaborate functionality.
I’m excited for what’s to come. Stay tuned for more!
In Vortex, we decided to build our own custom Lua binding. This initial work was done about two years ago by directly invoking the Lua C API. This means that exposing something like the vtx::Transform class, along with a few methods, would look something like this:
void Bindings::registerBindings( lua_State* pLua )
{
    luaL_newmetatable( pLua, "vtx.Transform" );
    lua_pushvalue( pLua, -1 );
    lua_setfield( pLua, -2, "__index" );

    lua_pushcfunction( pLua, nativeTransformGetPosition );
    lua_setfield( pLua, -2, "get_position" );

    lua_pushcfunction( pLua, nativeTransformSetPosition );
    lua_setfield( pLua, -2, "set_position" );

    lua_pushcfunction( pLua, nativeTransformGetScale );
    lua_setfield( pLua, -2, "get_scale" );
}
Now, the problem with this approach is that it's very verbose and pretty error-prone. Wouldn't it be nice if we could have a Modern C++ way of exposing classes and their methods to Lua scripts that didn't require all this typing?
What we want to do is easily define a class with a few methods and their callbacks into native code.
Leveraging C++11’s initializer lists, it turns out it’s pretty simple to do this. Have a look at the following simplified way of defining functions. It looks more readable and way more in line with Modern C++ in my opinion:
void Bindings::registerBindings( lua_State* pLua )
{
    vtx::BindingUtils::pushClass( pLua, "vtx.Transform", {
        { "get_position", nativeTransformGetPosition },
        { "set_position", nativeTransformSetPosition },
        { "get_scale",    nativeTransformGetScale }
    } );
}
The pushClass method receives the name under which the class will be exposed to Lua scripts and a table of method names that map to the native implementation inside the engine. It also receives the Lua VM handle, of course :)
We want to handle the function list as a vector of a composite type that maps a string to an implementation. To do this, we start by defining a FunctionDefinition struct, which "maps" function names (provided as strings) to implementations (provided as typed function pointers).
namespace vtx
{
    struct FunctionDefinition
    {
        std::string name;
        int ( *implementationPtr )( lua_State* l );
    };
}
Now that we have a way of defining functions, we provide an implementation of the pushClass() function that can take a vector of this struct:
void vtx::BindingUtils::pushClass( lua_State* L, const char* className,
                                   const std::vector< vtx::FunctionDefinition >& memberFunctions )
{
    assert( L );

    luaL_newmetatable( L, className );
    lua_pushvalue( L, -1 );
    lua_setfield( L, -2, "__index" );

    for ( auto&& functionDef : memberFunctions )
    {
        assert( functionDef.implementationPtr );
        lua_pushcfunction( L, functionDef.implementationPtr );
        lua_setfield( L, -2, functionDef.name.c_str() );
    }
}
Notice how the case of receiving an empty vector is gracefully handled: an empty table is simply defined. This could be useful for exposing a type with no members to be used as a handle to an engine object.
This method does work and I have already been using it to vastly simplify the binding code that exposes engine objects to scripts. This has made the client code much more readable and easier to extend.
This is, however, not necessarily the best way of doing this. Other authors have done an amazing job at solving this problem using excellent C++ template metaprogramming concepts. For our case, this simple implementation is enough to manage the complexity of the problems we have today.
Now it’s time to go build something with this! In my next post I’m going to be using this logic to build and expose a camera system to Lua scripts for them to be able to control and animate the point of view.
Stay tuned for more!
Codebase after 2.5 years
April 11, 2016 marks the date when we decided to kick off Vortex V3 by building a visual editor for it from scratch.
Back then progress was crazy fast, with every coding session adding lots of new features worth covering in full.
2.5 years later, feature work continues, but at a more steady pace. Making changes to systems requires careful engineering and testing to make sure existing features are not broken, and in some cases, a small fix might end up in a big refactor of a less-than-polished system.
This is normal and expected, but it has been a factor in the cadence of new material in the blog. Sorry about that :)
On the flip side, building atop a more complete system enables focusing on newer features and on more polish, since we are not constantly bootstrapping all the systems in the engine anymore. This leads us to…
Vertical Slice Complete!
With work on persistence and Vortex Archives largely complete, the renderer expanded, Lua scripting supported, and a full Editor-to-Device workflow in place, we have reached a point where I think we can consider the Vertical Slice of Vortex V3 complete!
This is a big milestone. We set out to build an Editor that could be used to visually assemble playgrounds, script them, serialize them, and deploy them on a wide array of platforms. This year we finally achieved that goal.
There is something truly special about the possibility of opening up the engine for users to script via Lua.
As of now, the limit is no longer whatever was pre-built into the engine. We are at a point where we can power the user’s imagination with (many) limitations removed.
On that topic,
Lua Scripting
Lua has been an absolute joy to work with. The code is neat, self-contained, very rich, and portable. In the Engine, we were able to easily wrap the VM and run it both in the Editor and on iOS devices.
Exposing the engine's API opens up tons of possibilities: we are no longer just building internal systems, but rather publishing a "service" that scripts can leverage to simulate and change the 3D world.
Not many people know this, but Vortex has had very comprehensive terrain generation and spline evaluation logic for a number of years now. These features are completely buried, as they are more playground-level than engine-specific code. With Lua, we can now surface these to the user as pre-packaged, efficient, native facilities.
Editor Run and Edit Modes
As soon as we opened up the possibility of scripts running in the Editor and changing the world, we knew we would need a way to revert the changes scripts make to an unsaved scene. By leveraging the now-complete persistence system, this was easy to achieve.
In the Editor, when you start running your Lua scripts, the Editor serializes the scene behind the scenes into a temporary manifest that is persisted somewhere. You can pause or resume your scripts as normal, with them altering the world.
As soon as the user stops script execution, the scene is restored from the persisted manifest and the VM is reset.
There is more work that can be performed here to ensure better performance (shorter deserialization cycles), but the core concept is a very powerful one, as it allows freely testing scripts before packaging the playground for distribution.
This of course leads to,
Vortex Runtimes
This concept came out of left field, considering we set out to "only" build an editor and revamp the engine. Once the use case was identified, however, it made perfect sense.
Vortex runtimes are lightweight apps built on the engine that allow loading a Vortex Archive (generated via the editor) and running it on the target platform.
We started off by implementing an iOS runtime. It consists of a simple UI that lets you select which archive to run (from a list of predefined playgrounds), choose a rendering backend, and load and run it.
Runtimes let us bring playgrounds to any platform without having to port the entire editor over, without requiring the user to build her own custom C++ app that hosts the engine, and without having to worry about how to draw it all.
What that means is,
All Things Rendering
On the topic of rendering, the renderer is now starting to look good enough to produce richer visuals than ever before in the history of the engine (so expect to see more screenshots and videos on this blog!).
The plan for next year is to continue pushing in this direction, building on a solid foundation and adding more high-fidelity visual techniques.
In addition to this work, this year saw the addition of a new native Metal renderer to the engine. The renderer is simple, but 100% compatible with the inputs taken by the rendering system used on other platforms. Metal was also a joy to work with. Very modern API with very good design decisions.
In Closing
It has been a big year for Vortex V3, with many efforts that started about two years ago finally taking shape.
We now have three major verticals: the editor, the scripting API, and graphics (with OpenGL and Metal). Two years ago we talked about how two engineers could work on this project full time; after this year, I think we could easily keep three engineers busy.
This was also a year where we started to see some tech debt rear its head, but we’ve been able to keep it under control. There are some major refactors that are going to be required for enhanced performance, but we are not at a point yet where I would prioritize that work over adding more features that directly improve a user’s experience.
We want to thank you for joining us throughout this year. We’re looking forward to what’s to come in 2019 and, as usual, stay tuned for more ;)
First things first
Last time we used the Vortex Editor to build and script a scene where a ball bounces inside a brick box. Let’s see if we can load it up in the iOS Runtime.
We start by loading our project into the Vortex Editor on Windows. Once open, we verify that it runs and that the simulation works as expected. With everything validated, we can "build" the project into a Vortex Archive.
The process of archiving collects all the assets in the project folder and puts them into a binary format. This, of course, includes all Lua script files in the folder.
Moving to the Mac
Awesome, we have our playground archive. How do we put it on device? We will use the Vortex Runtime for iOS for this.
We will take the Vortex Archive and place it under the bundled resources for our App. In the future, we could implement a system that pulls playgrounds from Dropbox, Firebase or iTunes, but for now this is enough.
When running on device, the runtime will extract the archive and deserialize the playground. The unarchiver keeps track of all extracted resources, so all we have to do is scan this list for all Lua files and run them in sequence.
The order of execution is alphabetical, but this could easily be expanded in the future.
Running on Device
The Vortex Runtime is different from the Editor. We will probably never want to edit a playground on device directly; we just want to run it as soon as it's fully loaded. That's what we're going to do!
Now, one concern I had was the impact running the Lua scripts might have on the device's CPU usage, and whether vanilla Lua would still be enough or whether I should plan on incorporating LuaJIT soon.
Xcode provides a great way to look at the hardware as we test our apps on it. As can be seen in the image above, even with vanilla Lua, CPU usage peaks at 19%. This is a mere 1% increase over before (no scripting). Granted, the script being executed is pretty simple, but keep in mind this is all running at 60 FPS. I think I am going to keep it this way (for now).
Conclusion
We finally have a complete Editor-to-Runtime experience: we can use our PC to visually create a playground, script it, simulate it, package it all together, and load it directly into the runtime.
I was very pleasantly surprised by the results of running the Bouncy Ball Lua script on device. It is a simple script but it shows just how polished the Lua VM is. After this test, we now know that we have enough headroom to build more involved 60 FPS simulations into our playgrounds. I’m excited about what’s to come next.
Stay tuned for more!
Scripting is a very desirable feature for any engine. It allows adding (and modifying) logic on the fly, without having to recompile or relink any part of the program. It makes iteration times super fast, enabling creativity.
In Vortex, we chose Lua for the scripting backend. We added initial support about a year ago. At that time, we decided to build a custom binding from scratch, and we succeeded, but the work was mostly a proof of concept. This weekend, the objective was to expand this foundation so scripts can perform more useful tasks, such as inspecting and manipulating the world.
In order to achieve this, a number of changes were needed at both the scripting level and the editor level.
The video above shows all these concepts coming together to allow creating a simple simulation of a ball bouncing inside a 3D box. The ball has a green point light inside that moves around with it. This is mostly to show that this simulation is still running on the engine's modern deferred renderer ;)
The Scripting Model
Key to the scripting model is the ability to talk to the engine from a loaded script and find objects in the scene. This allows the user to visually create worlds in the Vortex Editor and then find the important entities from scripts.
Scripts can also create their own entities of course, but for this example, we just wanted to pre-build the world visually.
For the bouncy ball example in the video above, we started off by creating the containing box, the ball object, and the lights in the scene. We used the Editor tools to create all materials and define the look of the entities and lighting.
But once we have our visual scene, how do we script it?
The entry point for scripts running in Vortex is the vtx namespace. Scripts hosted by Vortex automatically get access to a global table with entry points to the engine.
Functions in the vtx namespace are serviced directly from C++. This is a powerful abstraction that allows exposing virtually all engine functionality to a script.
This is exactly what we did. Through the vtx namespace, the bouncy_ball.lua script easily finds the ball, the walls, and the light. Once we have these objects we can get their transforms and register a function that will update them every frame.
Running Scripts
Once our script is ready, we can bring it into the scene directly from within the Editor.
Currently, loading a script also executes it, running all code at file scope. It's important that scripts that want to respond to engine events register their callbacks at this point.
In order to run every frame, we are interested in the on_frame entry inside the vtx.callbacks table. This table behaves essentially like a list: once every frame, the engine walks it and calls every function registered there.
Pausing and Testing
Since the runloop is controlled directly by the engine, this gives the Editor enormous control over script execution. In particular, we can use the Editor to pause and even step scripts!
Coupled with the Editor’s REPL Lua console, this gives the user a lot of control. Through the Editor UI, the user can stop the scripts and inspect and change any Lua objects in realtime. No need to recompile the Editor or reload the scene or scripts.
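Because the engine owns the runloop, pausing is conceptually as simple as skipping the callback walk. The sketch below is illustrative only: the real logic lives in C++, and the paused flag is a hypothetical stand-in for the Editor's state:

```lua
-- Stub of the engine state, for illustration only:
vtx = { callbacks = { on_frame = {} }, paused = false }

-- What the engine effectively runs once per frame:
local function engine_tick( delta_t )
    if vtx.paused then
        return -- the Editor flipped this flag: scripts freeze in place
    end
    for _, callback in ipairs( vtx.callbacks.on_frame ) do
        callback( delta_t )
    end
end
```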
Show me the Code!
Ok, we covered a lot of ground above. To help the concepts settle in, here’s the complete bouncy_ball.lua script used to build the simulation shown above. The main points of interest are the main and on_frame functions.
-- A ball bouncing inside a 3D box in Vortex Engine
-- This script looks for the following entities in the scene:
-- 1. box (the container)
-- 2. ball (the bouncing ball)
-- 3. ball_light (a light that is placed inside the ball)

move_speed = 5.0 -- ball move speed

function main()
    -- Find entities that we need and cache important transforms.
    ball = vtx.find_first_entity_by_name( "ball" )
    ball_xform = ball:get_transform()
    ball_xform:set_position( 0, 3, 0, 1 )
    ball_radius = ball_xform:get_scale() * 0.5

    ball_light = vtx.find_first_entity_by_name( "ball_light" )
    ball_light_xform = ball_light:get_transform()
    ball_light_xform:set_position( 0, 3, 0, 1 )

    box = vtx.find_first_entity_by_name( "box" )
    box_xform = box:get_transform()
    bx, by, bz = box_xform:get_position()
    bsx, bsy, bsz = box_xform:get_scale()
    box_scale = { bsx, bsy, bsz }

    move_dir = { 1.0, 0.75, 0.5 } -- could be randomized with a seed

    -- Add ourselves to the engine's scripting runloop:
    table.insert( vtx.callbacks.on_frame, on_frame )
end

function on_frame( deltat )
    -- Called every frame by the engine
    x, y, z, w = ball_xform:get_position()

    -- Arrays in Lua are 1-based:
    x = x + move_dir[1] * deltat * move_speed
    y = y + move_dir[2] * deltat * move_speed
    z = z + move_dir[3] * deltat * move_speed

    if x + ball_radius >= bx + box_scale[1] * 0.5 or
       x - ball_radius <= bx - box_scale[1] * 0.5 then
        move_dir[1] = -move_dir[1]
    end
    if y + ball_radius >= by + box_scale[2] * 0.5 or
       y - ball_radius <= by - box_scale[2] * 0.5 then
        move_dir[2] = -move_dir[2]
    end
    if z + ball_radius >= bz + box_scale[3] * 0.5 or
       z - ball_radius <= bz - box_scale[3] * 0.5 then
        move_dir[3] = -move_dir[3]
    end

    ball_xform:set_position( x, y, z, w )
    ball_light_xform:set_position( x, y, z, w )
end

main()
The main function is responsible for finding all important entities in the scene and initializing the simulation. As mentioned before, it is run as soon as the script is loaded into the engine. Notice how the main function adds the on_frame function to the runloop.
The on_frame function runs every frame. It receives a time delta that can be used to implement a framerate-independent simulation.
It is worth noting that nothing in the on_frame function allocates memory. In particular, position components are passed into and pulled out of the engine on the Lua stack, with no heap allocations. This is important because Lua has a garbage-collected runtime and we want to avoid collection pauses during the simulation.
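To make the distinction concrete, here is a small, hypothetical contrast between the two styles (the function names are illustrative, not Vortex API):

```lua
-- Multiple return values travel on the Lua stack: no garbage per call.
local function get_position_stack()
    return 1.0, 2.0, 3.0, 1.0
end
local x, y, z, w = get_position_stack() -- allocation-free

-- Returning a table instead would allocate a new object on every call,
-- feeding the garbage collector frame after frame:
local function get_position_table()
    return { 1.0, 2.0, 3.0, 1.0 } -- allocates
end
local pos = get_position_table()
```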
Conclusion
It's been a lot of fun exploring hosting a scripting language inside the engine and manually building the binding between it and C++.
I think the ability to define the visual appearance of the scene in the Editor and then let scripts find entities at runtime was the right decision at this time. It's a simple model that solves the problem elegantly and can be performant if you cache the objects you need to access often.
I am going to continue working on the binding and see how far it can go. It's a good break from just working on the renderer all the time ;)
I'm definitely interested in your thoughts! Please share below and, as usual, stay tuned for more!
When we set out to revamp the engine and build an editor for Vortex, we wanted to give users a way to intuitively assemble and tweak 3D worlds and then run them on the engine without the fuss of having to rebuild the app every time.
The Vortex Editor moved us in that direction, allowing the user to visually build their world using point and click. The final missing piece was the ability to build a self-contained package that wraps all the project resources and can be distributed.
Enter the Vortex Archive (.vtx) files.
Vortex Archive files wrap all the resources necessary for the engine to load your created 3D world and run it. With this in place, we now have a full end-to-end experience where a world can be authored completely in the Vortex Editor, then packaged into a Vortex Archive, and ultimately run on any of the supported platforms.
Vortex Archive Format (.vtx)
In order to package the scene manifest and all referenced resources, I ended up designing a custom binary file format for the Archive. I used the extension .vtx, although I originally wanted to call these .var files (after Vortex ARchive). /var is, however, a well-established name on UNIX systems, so I didn't want to clash with that convention.
The format in its initial version is pretty simple to read and write. The following table shows how the resources are stored inside the archive.
| Size (bytes) | Contents |
|---|---|
| **Archive Header** | |
| 8 | Archive Version (currently 1.0) |
| 8 | Number of Entries |
| **Contents** | |
| 8 | Resource Path String Length |
| 8 | Sub-Resource Identifier String Length |
| 8 | Data Lump Size |
| varies | Resource path |
| varies | Sub-resource path |
| varies | Raw file data |
| … | … |
The contents section contains all the resources one after the other. The total number of stored resources is given by the archive header, under the “Number of Entries” field.
I could’ve added a “magic” number at the beginning, but all in all, this is a very simple format that binds everything together.
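As a rough illustration of the layout, here is how a single entry could be serialized and parsed from Lua 5.3 using string.pack/string.unpack. The field order follows the table above; the byte order is an assumption on my part (the post doesn't state it), and the engine's actual reader/writer is C++, so this is only a sketch:

```lua
-- Serialize one archive entry: three 8-byte lengths followed by the three
-- variable-sized payloads. "<I8" = little-endian 8-byte unsigned integer
-- (little-endian is an assumption here).
local function pack_entry( resource_path, sub_resource_id, data )
    return string.pack( "<I8I8I8", #resource_path, #sub_resource_id, #data )
        .. resource_path .. sub_resource_id .. data
end

-- Read an entry back; string.unpack also returns the position after the
-- consumed bytes, so entries can be walked one after the other.
local function unpack_entry( bytes, offset )
    local path_len, sub_len, data_len, pos =
        string.unpack( "<I8I8I8", bytes, offset )
    local path = bytes:sub( pos, pos + path_len - 1 ); pos = pos + path_len
    local sub  = bytes:sub( pos, pos + sub_len - 1 );  pos = pos + sub_len
    local data = bytes:sub( pos, pos + data_len - 1 ); pos = pos + data_len
    return path, sub, data, pos
end
```

Walking the archive is then just a loop from the header's "Number of Entries" field, threading the returned position through each unpack_entry call.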
A Note on Compression
As part of the format definition process, I studied compressing each individual resource using zip. Ultimately, I discarded the idea.
Although zip compression would be beneficial for text resources (such as the scene manifest), at this time the vast majority of resources stored are (already) compressed image files. These are not expected to deflate significantly, so I couldn’t justify the increase in complexity at this time.
I might revisit this in the future as we expand scripting support and provide the ability to write custom shaders.
Vortex Runtime
With a complete workflow from Editor to Engine, it's now possible to build a 3D world entirely on desktop and deploy it on any of the supported platforms: Windows, Mac, Linux-based OSes and mobile devices.
Now, as easy as it is to add the engine to a project, there are cases where we don't want to write an app just to run a 3D world; cases where all we need is a thin layer that loads and runs our archive. For these, we've decided to add a third project into the mix: the Vortex Runtime.
The Runtime is a small frontend app to the engine that can load a Vortex Archive and play it. It’s a minimal, platform-specific app that wraps the underlying intricacies and provides a consistent interface on which Vortex Archives can be run.
Runtimes can be developed for any platform where the engine is available, enabling authored 3D worlds to be deployed virtually anywhere. An advanced user will probably still want to use C++ in order to stay in control, but for building simple playgrounds in Lua, the Runtime might be all that you need.
I think this is a powerful concept, and a fun one to explore: determining how much of the engine we can expose through the scripting API before users have to dip into native code.
Conclusion
It’s been a long ride since we initially set off to build a vertical of the Editor and revamp the Engine, but it has been worth it.
With these latest additions, we’ve now got a complete tool that we can grow horizontally. It is a starting point we can use to study the implementation of new rendering techniques, as well as further explore tech related to simulation, physics, compilers, scripting APIs and native platforms.
This moment is the culmination of a lot of hard work (during my free time), but it’s not the end. It is the beginning. Stay tuned for more to come!
Point lights are a bit more complicated to implement in a deferred renderer than directional lights. For directional lights, we can usually get away with drawing a fullscreen quad to calculate the light contribution to the scene. With point lights, we need to render a light volume for each light, calculating the light's contribution only for the meshes it intersects.
The following image is from one of the earlier tests I was conducting while implementing the lights. Here, I decided to render the light volumes as wireframe meshes for debugging purposes.
If you look closely, you can see how each light is contained to a sphere and only contributes to the portions of the scene it is intersecting. This is the great advantage of a deferred renderer when compared to a traditional forward renderer.
In a forward renderer, we would have had to draw the entire scene for each light. Only at the very end of the pipeline would we discover that a point light contributed nothing to a fragment; by then, we would already have performed all the work in the fragment shader. In comparison, a deferred renderer only computes the subsection of the screen affected by each light volume. This allows for very large numbers of realtime lights in a scene, with the total cost of many small lights amounting to roughly that of one big fullscreen light.
Determining Light Intersections
One problem that arises when rendering point light volumes is determining the intersection with the scene geometry. There are different ways of solving this problem. I decided to base my approach on this classic presentation by NVIDIA.
The idea is to use the stencil buffer to cleverly test the light volumes against the z-buffer. For this to work, I had to do a pre-pass that renders the back faces of the light volumes. During this pass, we update the stencil value only on z-fail. A z-fail means we can't see the back of our light volume because another mesh is in front of it: exactly the intersection we're looking for!
Once the stencil buffer pass is complete, we do a second pass of the light volumes, this time with the stencil test set to match the reference value (and z-testing disabled). The fragments where the test passes are lit by the light.
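Putting the two passes together, the sequence looks roughly like this. The vtx.gfx calls below are hypothetical placeholders for the engine's internal render-state API (which the post doesn't expose; the real implementation lives in C++), so treat this as a sketch of the technique rather than working engine code:

```lua
-- Hypothetical sketch of the two-pass stencil technique for one point light.
function draw_point_light( light_volume )
    -- Pass 1: mark fragments inside the light volume in the stencil buffer.
    vtx.gfx.set_color_write( false )                  -- no color output yet
    vtx.gfx.set_cull_face( "front" )                  -- draw back faces only
    vtx.gfx.set_stencil_op( "keep", "incr", "keep" )  -- increment on z-fail
    vtx.gfx.draw( light_volume )

    -- Pass 2: shade only where the stencil matched; depth test disabled.
    vtx.gfx.set_color_write( true )
    vtx.gfx.set_depth_test( false )
    vtx.gfx.set_stencil_func( "equal", 1 )            -- match reference value
    vtx.gfx.draw( light_volume )
end
```

The key insight is that pass 1 writes no color at all; it only records, per pixel, whether scene geometry sits inside the volume, so pass 2 can shade exactly those pixels and nothing else.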
The image above shows the idea. In it, you can see how the light volume determines the fragments that the point light is affecting.
Screenshots
Here are some more screenshots of the technique.
In the following image, only the lion head has a bump map; for the rest of the meshes, we're just using the geometric normal. Even as I was building this system, I was in awe at the interaction of normal mapping with the deferred point lights. Take a look at the lion head (zoom in for more detail); the results are astounding.
Here’s our old friend, the test cube, being lit by 3 RGB point lights.
I’m still playing with the overall light intensity scale (i.e. what does “full intensity” mean?). Lights are pretty dim in the Sponza scene, so I might bring them up across the board to be more like in the cube image.
Conclusion
Deferred rendering is definitely an interesting technique that brings a lot to the table. In recent years it has been superseded by more modern techniques like Forward+, but the results are undeniable, especially when combined with shading techniques such as normal mapping.
The next steps will be to implement spot light support and start implementing post processing techniques.
Stay tuned for more!
In the image above, we have a scene composed of several meshes, each with its own material, being affected by three directional lights. The lights have different directions and colors and the final image is a composition of all the color contributions coming from each light.
In order to make the most out of this functionality in the engine, I revamped the Light Component Inspector. It’s now possible to set the direction and color through the UI and see the results affect the scene immediately. You can see the new UI in the screenshot above.
Now, since lights are entities, I considered reusing the entity's rotation to rotate a predefined vector and thus define the light's direction. In the end, however, I decided against it. I think it is clearer to let the user set the direction vector explicitly in the UI than to make them work out angles in their head to produce an obscure internal vector.
I'm pretty happy with the results. Internally, each light is computed individually and then all contributions are additively blended onto the framebuffer. This means the cost of rendering n objects affected by m lights is n + m draw calls. This is a big advantage over the forward rendering equivalent, which would require at least n * m draw calls.
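To make the cost difference concrete, a quick back-of-the-envelope calculation (the mesh and light counts below are made up for illustration, not measured engine data):

```lua
-- Draw-call cost comparison for n meshes and m lights.
local n, m = 100, 8

local deferred = n + m  -- one geometry pass, then one pass per light
local forward  = n * m  -- every mesh redrawn once per light

print( deferred )  -- 108
print( forward )   -- 800
```

Even at modest scene sizes the gap is large, and it widens linearly with each additional light in the forward case.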
Notably missing from the image above is color bleed. Photorealism is addictive: the closer you approximate real life, the easier it becomes to tell that an image is synthetic when something is missing. That will be a topic for another time, however.
Next week I want to make some additions to the material system to make it more powerful, as well as start implementing omnidirectional lights.
Stay tuned for more!
Since the box was not the best primitive for testing specular highlights, I added a sphere primitive to the engine. This primitive is also going to come in handy when we start adding point lights.
The sphere primitive includes normal and texture coordinate data, enabling shading from both the regular deferred light pass and the normal mapping pass. Both passes are shown in the image above. The difference is dramatic. Notice how normal mapping helps convey the illusion of a rough brick surface, whereas the “geometrical” normal just makes it look flat.
Please don't mind the glitches in the GIF above. Overriding the mouse cursor makes the ScreenToGif tool I use act up a little. I promise these are not Editor bugs ;)
This computer consists of an NVIDIA Tegra K1 System-on-Chip (SoC) with a touch screen connected. It is housed in a custom-built enclosure that somewhat resembles an iMac and other "all-in-one" PCs out there.
What I find fascinating is how, using off-the-shelf components, one can build an "appliance" that showcases 3D tech and/or art for a booth.
In this case, the machine is demoing a PBR renderer built for the Qt framework. Of course, I can envision this as an interesting showcase for the Vortex Engine as well. Because the renderer can scale from a dedicated desktop GPU all the way down to an iPhone, and it already supports Linux-based systems, it would be easy to build something similar. This would also give me the opportunity to play with some hardware in my free time : )
I’m excited about this project. I will start looking into it once the renderer is more mature.
Here, I switch back and forth between the regular Geometry Pass shader and a Normal Mapping-aware shader. Notice how normal mapping dramatically changes the appearance of the bricks, making them feel less like part of a flat surface and more like a real, coarse surface.
I initially discussed Normal Mapping back in 2014, so I definitely recommend you check out that post for more details on how the technique works. The biggest difference in building Normal Mapping in Vortex V3 compared to Vortex 2.0 was implementing it on top of the new Deferred Renderer.
There is more work to be done in terms of Normal Mapping, such as adding specular mapping, but I’m happy with the results so far. Next week we continue working on graphics! Stay tuned for more!