Developing the Scripting API

With the vertical slice complete and the 3D renderer in a good spot, this week I decided to shift focus to the scripting API of the engine.

Scripting is a very desirable feature for any engine. It allows adding (and modifying) logic on the fly, without having to recompile or relink any parts of the program. It makes iteration times super fast, enabling creativity.

In Vortex, we chose Lua for the scripting backend. We added initial support about a year ago. At that time, we decided to build a custom binding from scratch and we succeeded, but the work done was mostly proof of concept. This weekend, the objective was to expand this foundation so scripts could perform more useful tasks, such as inspecting and manipulating the world.

In order to achieve this, a number of changes were needed, both at the scripting level and at the editor level. In particular, we needed:

  • A way to wrap and expose entity transforms to Lua scripts.
  • A way to mutate these transforms.
  • A way for scripts to add themselves to the runloop and run logic every frame.
  • A way for the engine and editor to run (and “step”) scripts.
  • A way to hot-reload scripts and reboot the VM when things go south.

The video above shows all these concepts coming together to create a simple simulation of a ball bouncing inside a 3D box. The ball has a green point light inside that moves around with it. This is mostly to show that the simulation is still running on the engine’s modern deferred renderer ;)

The Scripting Model

Key to the scripting model is the ability to talk to the engine from a loaded script and find objects in the scene. This allows the user to visually create worlds in the Vortex Editor and then find the important entities from scripts.

Scripts can also create their own entities of course, but for this example, we just wanted to pre-build the world visually.

For the bouncy ball example in the video above, we started off by creating the containing box, the ball object, and the lights in the scene. We used the Editor tools to create all materials and define the look of the entities and lighting.

But once we have our visual scene, how do we script it?

The entry point for scripts running in Vortex is the vtx namespace. Scripts hosted by Vortex automatically get access to a global table with entry points to the engine.

Functions in the vtx namespace are serviced directly from C++. This is a powerful abstraction that allows exposing virtually all engine functionality to a script.

This is exactly what we did. Through the vtx namespace, the bouncy_ball.lua script easily finds the ball, the walls, and the light. Once we have these objects we can get their transforms and register a function that will update them every frame.
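In code, the lookup reads something like this (a sketch – vtx.find_entity and the transform accessor are assumed names, not necessarily the final API):

    -- Sketch: looking up pre-built entities by name. Names are assumptions.
    local ball  = vtx.find_entity("ball")
    local light = vtx.find_entity("green_light")

    local ball_transform  = ball:get_transform()
    local light_transform = light:get_transform()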

Running Scripts

Once our script is ready, we can bring it into the scene directly from within the Editor.

Currently, loading a script also executes it, running all of its code at file scope. It’s important that scripts that want to respond to engine events register their callbacks at this point.

In order to run every frame, we are interested in the on_frame event inside the vtx.callbacks table. This table behaves essentially like a list. Once every frame, the engine will walk this list and call all functions registered there.
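Registration is then a plain list insertion (again a sketch – I’m assuming vtx.callbacks can be treated as a regular Lua sequence):

    -- Assumed: vtx.callbacks behaves like a Lua sequence of per-frame handlers.
    local function on_frame(dt)
      -- per-frame logic goes here
    end

    table.insert(vtx.callbacks, on_frame)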

Pausing and Testing

Since the runloop is controlled directly by the engine, this gives the Editor enormous control over script execution. In particular, we can use the Editor to pause and even step scripts!

Coupled with the Editor’s REPL Lua console, this gives the user a lot of control. Through the Editor UI, the user can stop the scripts and inspect and change any Lua objects in realtime. No need to recompile the Editor or reload the scene or scripts.

Show me the Code!

Ok, we covered a lot of ground above. To help the concepts settle in, here’s the gist of the bouncy_ball.lua script used to build the simulation shown above. The main points of interest are the main and on_frame functions.
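Entity names and the exact vtx calls below are assumptions on my part; the structure, though, matches the description that follows.

    -- bouncy_ball.lua – a sketch; vtx API names are assumptions.
    local ball_transform, light_transform
    local velocity = { x = 1.5, y = 3.0, z = 1.0 }
    local half_extent = 4.0  -- assumed half-size of the containing box

    local function on_frame(dt)
      -- Positions travel through the Lua stack as three plain numbers,
      -- so nothing is allocated here.
      local x, y, z = ball_transform:get_position()

      x = x + velocity.x * dt
      y = y + velocity.y * dt
      z = z + velocity.z * dt

      -- Bounce: flip a velocity component when a wall is hit.
      if x < -half_extent or x > half_extent then velocity.x = -velocity.x end
      if y < -half_extent or y > half_extent then velocity.y = -velocity.y end
      if z < -half_extent or z > half_extent then velocity.z = -velocity.z end

      ball_transform:set_position(x, y, z)
      light_transform:set_position(x, y, z)  -- the green light follows the ball
    end

    local function main()
      local ball  = vtx.find_entity("ball")
      local light = vtx.find_entity("green_light")

      ball_transform  = ball:get_transform()
      light_transform = light:get_transform()

      -- Add on_frame to the runloop.
      table.insert(vtx.callbacks, on_frame)
    end

    main()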

The main function is responsible for finding all important entities in the scene and initializing the simulation. As mentioned before, it is run as soon as the script is loaded into the engine. Notice how the main function adds the on_frame function to the runloop.

The on_frame function runs every frame. It receives a time scale that can be used to implement a framerate-independent simulation.

It is worth noting that nothing in the on_frame function allocates memory. In particular, position components are passed into and pulled out of the engine in the Lua stack, with no heap allocations. This is important, as Lua has a Garbage-Collected runtime and we want to avoid collection pauses during the simulation.

Conclusion

It’s been a lot of fun exploring hosting a scripting language inside the engine and manually building the binding between it and C++.

I think the ability to define the visual appearance of the scene in the Editor and then let scripts find entities at runtime was the right decision at this time. It’s a simple model that solves the problem elegantly and can be performant if you cache the things you need to access often.

I am going to continue working on the binding and see how far it can go. It’s a good break from just working on the renderer all the time ;)

I’m definitely interested in your thoughts! Please share below and, as usual, stay tuned for more!

Vortex Engine V3 and Editor Vertical Slice Complete!

Just a few days ahead of the 2018 SIGGRAPH conference, the last few pieces fell into place to allow a complete experience from Editor to Runtime.

Vortex Editor and Runtime. Using a .vtx archive, 3D worlds built in the Editor can now be packaged with their resources and run on the Engine. In this image, the sponza scene as set up in the Editor, is running on the Vortex Runtime for iOS.

When we set out to revamp the engine and build an editor for Vortex, we wanted to provide users of the engine with a way to intuitively assemble and tweak 3D worlds and then run them on the engine without the fuss of having to rebuild the app every time.

The Vortex Editor moved us in that direction, allowing the user to visually build their world using point and click. The final missing piece was the ability to build a self-contained package that wraps all the project resources and can be distributed.

Enter the Vortex Archive (.vtx) files.

Vortex Archive files wrap all the resources necessary for the engine to load your created 3D world and run it. With this in place, we now have a full end-to-end experience where a world can be authored completely in the Vortex Editor, then packaged into a Vortex Archive, and ultimately run on any of the supported platforms.

Vortex Archive Format (.vtx)

In order to package the scene manifest and all referenced resources, I ended up designing a custom binary file format for the Archive. I used the extension .vtx, although I originally wanted to call these .var files (after Vortex ARchive). However, .var is associated with temporary files on UNIX systems, so I didn’t want to clash with that convention.

The format in its initial version is pretty simple to read and write. The following table shows how the resources are stored inside the archive.

Size (bytes)  Contents

Archive Header:
8             Archive Version (currently 1.0)
8             Number of Entries

Contents (one record per resource, repeated “Number of Entries” times):
8             Resource Path String Length
8             Sub-Resource Identifier String Length
8             Data Lump Size
varies        Resource Path
varies        Sub-Resource Identifier
varies        Raw File Data

The contents section contains all the resources one after the other. The total number of stored resources is given by the archive header, under the “Number of Entries” field.
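To give a feel for how simple the format is, here’s a sketch of a reader written in Lua 5.3 using string.unpack. The field widths follow the table above; the little-endian byte order and treating the version as a single 8-byte integer field are my assumptions:

    -- Sketch of a .vtx reader (Lua 5.3+). Byte order is assumed little-endian.
    local function read_archive(path)
      local f = assert(io.open(path, "rb"))
      local data = f:read("a")
      f:close()

      -- Archive header: version and number of entries (8 bytes each).
      local version, count, pos = string.unpack("<I8 I8", data)

      local entries = {}
      for _ = 1, count do
        local path_len, sub_len, lump_size
        path_len, sub_len, lump_size, pos = string.unpack("<I8 I8 I8", data, pos)

        local entry = {}
        entry.path,   pos = string.unpack("c" .. path_len,  data, pos)
        entry.sub_id, pos = string.unpack("c" .. sub_len,   data, pos)
        entry.data,   pos = string.unpack("c" .. lump_size, data, pos)
        entries[#entries + 1] = entry
      end

      return version, entries
    end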

I could’ve added a “magic” number at the beginning, but all in all, this is a very simple format that binds everything together.

A Note on Compression

As part of the format definition process, I studied compressing each individual resource using zip. Ultimately, I discarded the idea.

Although zip compression would be beneficial for text resources (such as the scene manifest), at this time the vast majority of resources stored are (already) compressed image files. These are not expected to deflate significantly, so I couldn’t justify the increase in complexity at this time.

I might revisit this in the future as we expand scripting support and provide the ability to write custom shaders.

Vortex Runtime

With a complete workflow from Editor to Engine, it’s now possible to build a 3D world entirely on the desktop and deploy it to any of the supported platforms: Windows, Mac, Linux-based OSs and mobile devices.

Now, as easy as it is to add the engine to a project, there might be cases where we don’t want to write an app just to be able to run a 3D world. Cases where all we need is a thin layer that loads and runs our archive. For these cases, we’ve decided to add a third project into the mix: the Vortex Runtime.

The Runtime is a small frontend app to the engine that can load a Vortex Archive and play it. It’s a minimal, platform-specific app that wraps the underlying intricacies and provides a consistent interface on which Vortex Archives can be run.

Runtimes can be developed for any platform where the engine is available, enabling authored 3D worlds to be deployed virtually anywhere. An advanced user will probably still want to use C++ in order to stay in control, but for building simple playgrounds in Lua, the Runtime might be all that you need.

I think that this is a powerful concept – and a fun one to explore – seeing how much of the engine we can expose through the scripting API before users have to dip into native code.

Conclusion

It’s been a long ride since we initially set off to build a vertical slice of the Editor and revamp the Engine, but it has been worth it.

With these latest additions, we’ve now got a complete tool that we can grow horizontally. It is a starting point we can use to study the implementation of new rendering techniques, as well as further explore tech related to simulation, physics, compilers, scripting APIs and native platforms.

This moment is the culmination of a lot of hard work (during my free time), but it’s not the end. It is the beginning. Stay tuned for more to come!

Deferred Realtime Point Lights

It has been a while since my last update, but I’m excited to share significant progress on the new renderer. As of a couple of weeks ago, the new renderer supports realtime deferred point lights!

Point Lights in Vortex Engine 3.0’s Deferred Renderer. Sponza scene Copyright (C) Crytek.

Point lights are a bit more complicated to implement in a deferred renderer than directional lights. For a directional light, we can usually get away with drawing a fullscreen quad to calculate the light’s contribution to the scene. For point lights, we need to render a light volume per light and calculate the contribution only for the geometry it intersects.

The following image is from one of the earlier tests I was conducting while implementing the lights. Here, I decided to render the light volumes as wireframe meshes for debugging purposes.

Deferred Point Lights with their Volumes rendered as a wireframe.

If you look closely, you can see how each light is contained to a sphere and only contributes to the portions of the scene it is intersecting. This is the great advantage of a deferred renderer when compared to a traditional forward renderer.

In a forward renderer, we would have to draw the entire scene for each light, and only at the very end of the pipeline would we find out that a point light contributed nothing to a fragment – after all the fragment shader work had already been done. In comparison, a deferred renderer only shades the subsection of the screen affected by each light volume. This allows for very large numbers of realtime lights in a scene, with the total cost of many small lights on screen amounting to roughly that of a single big fullscreen light.

Determining Light Intersections

One problem that arises when rendering point light volumes is determining the intersection with the scene geometry. There are different ways of solving this problem. I decided to base my approach on this classic presentation by NVIDIA.

Light Volume Stencil Testing. We use the stencil buffer to determine which fragments are at the intersection of the light volume with a mesh.

The idea is to use the stencil buffer to cleverly test the light volumes against the z-buffer. In order for this to work, I had to do a pre-pass, rendering the back faces of the light volumes. During this pass, we update the stencil value only on z-fail. Z-fail means that we can’t see the back of our light volume because another mesh is there – exactly the intersection we’re looking for!

Once the stencil buffer pass is complete, we do a second pass of the light volumes, this time with the stencil test set to match the reference value (and z-testing disabled). The fragments where the test passes are lit by the light.

The image above shows the idea. In it, you can see how the light volume determines the fragments that the point light is affecting.

Screenshots

Here are some more screenshots of the technique.

In the following image, only the lion head has a bump map; for the rest of the meshes, we’re just using the geometric normal. Even as I was building this system, I was in awe of the interaction between normal mapping and the deferred point lights. Take a look at the lion head (zoom in for more detail); the results are astounding.

Vortex Engine 3.0 – Deferred Point Lights interacting with normal mapped and non-normal mapped surfaces.

Here’s our old friend, the test cube, being lit by 3 RGB point lights.

Vortex Engine 3.0 – Our trusty old friend, the test cube, being lit by 3 realtime deferred point lights.

I’m still playing with the overall light intensity scale (i.e. what does “full intensity” mean?). Lights are pretty dim in the Sponza scene, so I might bring them up across the board to be more like in the cube image.

Conclusion

Deferred rendering is definitely an interesting technique that brings a lot to the table. In recent years it has been superseded by more modern techniques like Forward+; however, the results are undeniable – especially when combined with elaborate shading techniques such as normal mapping.

The next steps are to add spot light support and start working on post-processing techniques.

Stay tuned for more!

Multiple Directional Realtime Lights

This week work went into supporting multiple directional realtime lights in the new Vortex V3 renderer.

Multiple Realtime Lights rendered by Vortex V3.

In the image above, we have a scene composed of several meshes, each with its own material, being affected by three directional lights. The lights have different directions and colors and the final image is a composition of all the color contributions coming from each light.

In order to make the most out of this functionality in the engine, I revamped the Light Component Inspector. It’s now possible to set the direction and color through the UI and see the results affect the scene immediately. You can see the new UI in the screenshot above.

Now, since lights are entities, I considered reusing the entity’s rotation to rotate a predefined vector and thus define the light’s direction. In the end, however, I decided against it: I think it is clearer to set the direction vector explicitly in the UI than to make the user juggle angles in their head to arrive at an obscure internal vector.

I’m pretty happy with the results. Internally, each light is computed individually and then all contributions are additive-blended onto the framebuffer. This means the cost of rendering n objects affected by m lights is n + m draw calls – for example, 100 objects and 3 lights come to 103 draw calls. This is a big advantage over the forward rendering equivalent, which would require at least n * m draw calls (300 in the same example).

Notably missing from the image above is color bleed. Photorealism is addictive: the closer you approximate real life, the easier it is to tell an image is synthetic when something is missing. That will be a topic for another time, however.

Next week I want to make some additions to the material system to make it more powerful, as well as start implementing omnidirectional lights.

Stay tuned for more!

Procedural Sphere Primitive and Specular Highlights

This week I spent some time adding specular highlights to the Deferred Renderer. The results can be seen below.

A procedural sphere with specular highlights. Notice the dramatic difference that normal mapping makes when computing the highlights.

Since the box was not the best primitive to test specular highlights, I added a sphere primitive to the engine. This primitive is also going to come in handy when we start adding point lights.

The sphere primitive includes normal and texture coordinate data, enabling shading from both the regular deferred light pass and the normal mapping pass. Both passes are shown in the image above. The difference is dramatic. Notice how normal mapping helps convey the illusion of a rough brick surface, whereas the “geometrical” normal just makes it look flat.
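For reference, generating a lat/long sphere with normals and UVs boils down to something like the sketch below. This is illustrative only – the ring/segment parameterization and vertex layout are not the engine’s actual implementation:

    -- Sketch: lat/long UV sphere with positions, normals and texture coords.
    local function build_sphere(radius, rings, segments)
      local vertices = {}
      for r = 0, rings do
        local phi = math.pi * r / rings                -- polar angle, 0..pi
        for s = 0, segments do
          local theta = 2.0 * math.pi * s / segments   -- azimuth, 0..2*pi
          -- On a sphere, the unit normal is the position divided by the radius.
          local nx = math.sin(phi) * math.cos(theta)
          local ny = math.cos(phi)
          local nz = math.sin(phi) * math.sin(theta)
          vertices[#vertices + 1] = {
            position = { nx * radius, ny * radius, nz * radius },
            normal   = { nx, ny, nz },
            texcoord = { s / segments, r / rings },
          }
        end
      end
      return vertices
    end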

Please don’t mind the glitches in the GIF above. Overriding the mouse cursor makes the ScreenToGif tool I use act up a little bit. I promise these are not Editor bugs ;)

Custom Computer @ SIGGRAPH

I’m back from SIGGRAPH 2017 and I wanted to share with you this amazing custom computer I saw at the expo.

A computer made out of a Tegra K1 SoC, a touch screen and a custom-built enclosure.

This computer consists of an NVIDIA Tegra K1 System-on-Chip (SoC) connected to a touch screen, housed in a custom-built enclosure that somewhat resembles an iMac and other “all in one” PCs out there.

What I find fascinating is how, with off-the-shelf components, one can build an “appliance” that showcases 3D tech and/or art for a booth.

In this case, the machine is demoing a PBR renderer built for the Qt framework. Of course, I can envision this as an interesting showcase for the Vortex Engine as well. Because the renderer can scale from a dedicated desktop GPU all the way down to an iPhone, and it already supports Linux-based systems, it would be easy to build something similar. This would also give me the opportunity to play with some hardware in my free time : )

I’m excited about this project. I will start looking into it once the renderer is more mature.

Normal Mapping 2.0

This week we switched gears back into Rendering! A lot of work went into building Normal Mapping into the new Renderer. The following image shows the dramatic results:

Normal mapping in the new Deferred Renderer.

Here, I switch back and forth between the regular Geometry Pass shader and a Normal Mapping-aware shader. Notice how Normal mapping dramatically changes the appearance of the bricks, making them feel less part of a flat surface and more like a real, coarse surface.

I initially discussed Normal Mapping back in 2014, so I definitely recommend you check out that post for more details on how the technique works. The biggest difference in building Normal Mapping in Vortex V3 compared to Vortex 2.0 was implementing it on top of the new Deferred Renderer.

There is more work to be done in terms of Normal Mapping, such as adding specular mapping, but I’m happy with the results so far. Next week we continue working on graphics! Stay tuned for more!

Putting it all together

This week has been a big one in terms of wrangling together several big pillars of the engine to provide wider functionality. The image below shows how we can now dynamically run an external Lua script that modifies the 3D world on the fly:

Vortex loading and running an external script that changes the texture of an entity’s material.

In the image above, I’ve created two boxes. Both of these have different materials and each material references a different texture.

In the clip, I “mistakenly” drag a character texture from the Asset Library and assign it as the second box’s texture. Oh no! How can we fix this? It’s easy: just run an external script that assigns the first box’s texture to the second!

The script boils down to the following sketch (the entity names and exact vtx accessors are assumptions on my part):
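    -- Sketch: copy the first box's texture onto the second box's material.
    -- Entity names and accessor names are assumptions.
    local box1 = vtx.find_entity("box1")
    local box2 = vtx.find_entity("box2")

    -- Component type 1 aliases the Render Component.
    local material1 = box1:first_component_of_type(1):get_material()
    local material2 = box2:first_component_of_type(1):get_material()

    material2:set_texture(material1:get_texture())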

As you can see, the script is pretty straightforward. It finds the boxes, drills all the way down to their materials and then assigns the texture of the first box to the second. The changes are immediately seen in the 3D world.

It’s worth noting that all function calls into the vtx namespace and derived objects actually jump into C++. This script is therefore dynamically manipulating engine objects – that’s why we see its effects in the scene view.

The function names are still a work in progress and, admittedly, I need to write more scripts to see whether they feel comfortable or whether they’re too long and hard to remember. My idea is to make the scripting interface as simple to use as possible, so if you have any suggestions, I would love to hear your feedback! Feel free to leave a comment below!

Next week I will continue working on adding more functionality to the scripting API, as well as adding more features to the renderer! Stay tuned for more!

Component Introspection

This week, work on the scripting interface continued. As the image below shows, I can now access an entity’s components and even drill down to its Material through the Lua console.

Introspecting a Render Component to access its material via the Lua interface.

The image above shows an example of the scripting interface for entities and components. Here, we create a new Entity from the builtin Box primitive and then find its first component of type “1”. Type 1 is an alias for the Entity’s Render Component, which is responsible for binding a Mesh and a Material together.

Once we have the Render Component, we use it to access its Material property and print its memory address.
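In rough strokes, the console session goes like this (a sketch – the exact function names are assumptions, but the flow matches the image):

    -- Create a new Entity from the builtin Box primitive (assumed name).
    local box = vtx.create_entity("Box")

    -- Type 1 aliases the Render Component, which binds a Mesh and a Material.
    local render_comp = box:first_component_of_type(1)

    local material = render_comp:get_material()
    print(material)  -- prints the material's memory address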

As the image shows, although Lua allows for very flexible duck typing, I am performing type checking behind the scenes to make scripting mistakes obvious to the user. Allow me to elaborate:

For the interface, I’ve decided that all components will be hidden behind the vtx.Component “class”. This class is responsible for exposing the interface to all native component methods, such as get_material(), set_mesh(), get_transform() and so forth.

The problem is, how do we prevent trying to access the material property of a Component that doesn’t have one, such as the Md2AnimationComponent? In my mind, there are two ways. I’m going to call them the “JavaScript” way and the “Python” way.

In the JavaScript way, we don’t really care. We allow calling any method on any component and silently fail when a mismatch is detected. We may return nil or “undefined”, but at no point are we raising an error.

In the Python way, we will perform a sanity check before actually invoking the function on the component, and actually halt the operation when an error is found. You can see that in the example above. Here we’re purposefully attempting to get the material of a Base Component (component type 0), which doesn’t have one. In this case, the Engine detects the inconsistency and raises an error.
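Continuing the session above, the mismatch surfaces as a regular Lua error that a script could catch with pcall (again a sketch):

    -- The Base Component (type 0) has no material; the engine raises an error
    -- rather than failing silently. The error text here is illustrative.
    local base = box:first_component_of_type(0)
    local ok, err = pcall(function()
      return base:get_material()
    end)
    print(ok, err)  --> false   component has no material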

I feel the Python way is the way to go: it prevents the subtle, hard-to-debug errors that arise from letting any method be called on any component and happily carrying on to – hopefully – reach some sort of result.

A third alternative would have been to expose a separate “class” for every component type. This would certainly work, but I’m concerned about a potential “class explosion” as we continue to add more and more components to the Engine. Furthermore, I feel type-checked duck typing is a good approach for a language like Lua – well in tune with the language’s philosophy.

Now that we can drill all the way down to an Entity’s material, it’s time to expand the interface to allow setting the shader and the material properties, allowing the script developer to control how entities are rendered by the Engine. Stay tuned for more!

Vortex Editor turns 1 year!

I hadn’t realized, but the Vortex Editor turned one year old a couple of months ago. I set off to work on this project in my free time with a clear set of objectives and it’s hard to believe that one year has already passed since the initial kick-off.

Of course, the Editor is closely tied to the Engine, which has seen its fair share of improvements over the year. From building an entirely new deferred renderer to replacing the node-based scene graph with a flexible, extensible Entity Component System model, the enhancements have been wide and deep.

This post is a short retrospective on the accomplishments of the Vortex Editor and Vortex Engine through this last year.

Vortex Engine Achievements in the last year:

  1. Kicked off the third iteration of the Vortex Engine, codename “V3”.
  2. Upgraded the Graphics API to Core OpenGL 3.3 on Desktop and OpenGL ES 3.0 on Mobile.
  3. Implemented Deferred Rendering from scratch using MRT, establishing the base for PBR rendering.
  4. Built a new Entity Component System model, far more flexible than the old scene graph model, with support for Native and Script Components.
  5. Overhaul of several internal engine facilities, such as the Material and Texture systems.
  6. Completely redesigned engine facilities such as Lights and Postprocessing.
  7. New Lua-powered engine scripting.
  8. Ported to Windows, fixing several cross-compiler issues along the way. The engine now builds with clang, GCC and MSVC, and runs on Linux, Mac, Windows, iOS and Android.
  9. Started moving codebase to Modern C++ (C++11).

Vortex Editor Achievements in the last year:

  1. Successfully kicked off the project. Built from scratch in C++.
  2. Built a comprehensive, modular UI with a context-sensitive architecture that adjusts to what you’re doing.
  3. Bootstrapped the project using Vortex Engine 2.0, then quickly moved to V3 once things were stable.
  4. Provided basic serialization/deserialization support for saving and loading projects.
  5. Implemented a Lua REPL that allows talking to the engine directly and scripting the Editor.
  6. Built a friendly drag-and-drop interface for instantiating new Entities.
  7. Enabled complete visual control over an Entity’s position, orientation and scale in the world, as well as the configuration of its components.
  8. Allowed dynamically adding new components to entities to change their behavior.

It has been quite a ride for the last year. I believe all these changes have successfully built and expanded upon the 5 years of work on Vortex 1.1 and 2.0. I’m excited about continuing to work on these projects to ultimately come up with a product that is fun to tinker with.

My objectives for year two of the Editor include scene and asset packaging, expanded scripting support and PBR rendering.

Stay tuned for more!