2018: Vortex Retrospective

Despite what it might look like from reviewing this blog’s history, a lot of work went into Vortex through the course of 2018. So much so, in fact, that just like I did a retrospective for 2016 with lessons learned then, I wanted to take some time to reflect on the state of Vortex at the end of 2018.

Codebase after 2.5 years

April 11, 2016 marks the date when we decided to kick off Vortex V3 by building a visual editor for it from scratch.

Back then progress was crazy fast, with every coding session adding lots of new features worth covering in full.

2.5 years later, feature work continues, but at a steadier pace. Making changes to systems requires careful engineering and testing to make sure existing features don't break, and in some cases, a small fix can balloon into a big refactor of a less-than-polished system.

This is normal and expected, but it has been a factor in the cadence of new material in the blog. Sorry about that :)

On the flip side, building atop a more complete system enables focusing on newer features and on more polish, since we are not constantly bootstrapping all the systems in the engine anymore. This leads us to…

Vertical Slice Complete!

With persistence and Vortex Archives essentially complete, the renderer expanded, Lua scripting supported, and a full Editor-to-Device workflow in place, we have reached a point where I think we can consider the Vertical Slice of Vortex V3 complete!

Vortex Editor and Runtime. Using a .vtx archive, 3D worlds built in the Editor can now be packaged with their resources and run on the Engine. In this image, the sponza scene is running on the Vortex Runtime for iOS.

This is a big milestone. We set out to build an Editor that could be used to visually assemble playgrounds, script them, and then serialize and deploy them on a wide array of platforms. This year we finally achieved this goal.

There is something truly special about the possibility of opening up the engine for users to script via Lua.

As of now, the limit is no longer whatever was pre-built into the engine. We are at a point where we can power the user’s imagination with (many) limitations removed.

On that topic,

Lua Scripting

Lua has been an absolute joy to work with. The code is neat, self-contained, rich, and portable. In the Engine, we were able to easily wrap the VM and run it both in the Editor and on iOS devices.

Exposing the engine’s API opens up tons of possibilities: we are no longer just building internal systems, but rather publishing a “service” that scripts can leverage to simulate and change the 3D world.

Not many people know this, but Vortex has had very comprehensive terrain generation and spline evaluation logic for a number of years now. These features are completely buried, as they are more playground-level than engine-level code. With Lua, we can now surface them to the user as pre-packaged, efficient, native facilities.

Editor Run and Edit Modes

As soon as we opened up the possibility of scripts running in the Editor and changing the world, we knew we would need a way to revert the changes scripts make to an unsaved scene. By leveraging the now-complete persistence system, this was very easy to achieve.

The new Waypoint Tween Component is used to move a 3D model between four points.

In the Editor, when you start running your Lua scripts, the Editor will, behind the scenes, serialize the scene into a temporary manifest and persist it. You can pause and resume your scripts as normal while they alter the world.

As soon as the user decides to stop script execution, the scene is restored from the persisted manifest and the VM is reset.

There is more work that can be done here for better performance (shorter deserialization cycles), but the core concept is a powerful one: it allows freely testing scripts before packaging the playground for distribution.

This of course leads to,

Vortex Runtimes

This concept came out of left field, considering we set out to “only” build an editor and revamp the engine. Once the use case was identified, however, it just made sense.

The Bouncy Ball Playground, which consists of a ball animated from a Lua script, running on iPhone from a Vortex Archive.

Vortex runtimes are lightweight apps built on the engine that allow loading a Vortex Archive (generated via the editor) and running it on the target platform.

We started off by implementing an iOS runtime. It consists of a simple UI for selecting which archive to run (from a list of predefined playgrounds), choosing a rendering backend, and loading and running the archive.

Runtimes allow us to bring playgrounds to any platform without porting the entire editor over, without requiring users to build a custom C++ app to host the engine, and without having to worry about how to draw it all.

What that means is,

All Things Rendering

On the topic of rendering, the renderer has now reached a point where it looks good enough to produce richer visuals than ever before in the history of the engine (so expect more screenshots and videos on this blog!).

The plan for next year is to continue pushing in this direction, building on a solid foundation and adding more high-fidelity visual techniques.

In addition to this work, this year saw the addition of a new native Metal renderer to the engine. The renderer is simple, but 100% compatible with the inputs taken by the rendering system used on other platforms. Metal was also a joy to work with: a very modern API with very good design decisions.

In Closing

A big year for Vortex V3, with many efforts that started about two years ago finally taking shape.

We now have three major verticals: the Editor, the scripting API, and graphics (with OpenGL and Metal backends). Two years ago we talked about how two engineers could work on this project full time. I think that, after this year, we could easily keep three engineers busy.

This was also a year where we started to see some tech debt rear its head, but we’ve been able to keep it under control. There are some major refactors that are going to be required for enhanced performance, but we are not at a point yet where I would prioritize that work over adding more features that directly improve a user’s experience.

We want to thank you for joining us throughout this year. We’re looking forward to what’s to come in 2019 and, as usual, stay tuned for more ;)

Scripting the Runtime

This past weekend I spent some time making sure the Lua scripting runtime worked correctly on the iOS runtime. While archiving already took care of including scripts into generated Vortex Archives, the Lua runtime was not being invoked on device. This had to be addressed :)

The Bouncy Ball Playground, which consists of a ball animated from a Lua script, running on iPhone from a Vortex Archive.

First things first

Last time we used the Vortex Editor to build and script a scene where a ball bounces inside a brick box. Let’s see if we can load it up in the iOS Runtime.

We start by loading our project into the Vortex Editor on Windows. Once it's open, we verify that the project runs and that the simulation works as expected. After validating that everything is correct, we can “build” the project into a Vortex Archive.

The process of archiving collects all the assets in the project folder and puts them into a binary format. This, of course, includes all Lua script files in the folder.

Moving to the Mac

Awesome, we have our playground archive. How do we put it on device? We will use the Vortex Runtime for iOS for this.

We will take the Vortex Archive and place it under the bundled resources for our App. In the future, we could implement a system that pulls playgrounds from Dropbox, Firebase or iTunes, but for now this is enough.

When running on device, the runtime will extract the archive and deserialize the playground. The unarchiver keeps track of all extracted resources, so all we have to do is scan this list for all Lua files and run them in sequence.

The order of execution is alphabetical, but this could easily be expanded in the future.
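Conceptually, the script pass over the unarchiver's list boils down to the sketch below. It's shown in Lua for brevity; the real implementation is C++ inside the runtime, and extracted_resources is an assumed name for the unarchiver's path list.

```lua
-- Hypothetical sketch of the runtime's script pass.
local scripts = {}
for _, path in ipairs(extracted_resources) do
  if path:match("%.lua$") then
    scripts[#scripts + 1] = path
  end
end

table.sort(scripts)  -- alphabetical execution order

for _, path in ipairs(scripts) do
  dofile(path)  -- runs file-scope code, registering any callbacks
end
```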

Running on Device

The Vortex Runtime is different from the Editor. We will probably never want to edit a playground on device directly – we just want to run it as soon as it’s fully loaded. That’s what we’re going to do!

Xcode profile of the Vortex Runtime running the Bouncy Ball playground.

Now, one concern that I had was the impact that running the Lua scripts might have on the device’s CPU usage, and whether vanilla Lua would still be enough or whether I should plan on incorporating LuaJIT soon.

Xcode provides a great way to monitor the hardware as we test our apps on it. As can be seen in the image above, even with vanilla Lua, CPU usage peaks at 19%. This is a mere 1% increase over the no-scripting baseline. Granted, the script being executed is pretty simple, but keep in mind this is all running at 60 FPS. I think I am going to keep it this way (for now).

Conclusion

We finally have a complete Editor-to-Runtime experience: we can use our PC to visually create a playground, script it, simulate it, package it all together, and load it directly into the runtime.

I was very pleasantly surprised by the results of running the Bouncy Ball Lua script on device. It is a simple script but it shows just how polished the Lua VM is. After this test, we now know that we have enough headroom to build more involved 60 FPS simulations into our playgrounds. I’m excited about what’s to come next.

Stay tuned for more!

Developing the Scripting API

With the vertical slice complete and the 3D renderer in a good spot, this week I decided to shift focus to the scripting API of the engine.

Scripting is a very desirable feature for any engine. It allows adding (and modifying) logic on the fly, without having to recompile or relink any parts of the program. It makes iteration times super fast, enabling creativity.

In Vortex, we chose Lua for the scripting backend. We added initial support about a year ago. At that time, we decided to build a custom binding from scratch and we succeeded, but the work done was mostly proof of concept. This weekend, the objective was to expand this foundation so scripts could perform more useful tasks, such as inspecting and manipulating the world.

In order to achieve this, a number of changes were needed, both at the scripting level and at the editor level. In particular, we needed:

  • A way to wrap and expose entity transforms to Lua scripts.
  • A way to mutate these transforms.
  • A way for scripts to add themselves to the runloop and run logic every frame.
  • A way for the engine and editor to run (and “step”) scripts.
  • A way to hot-reload scripts and reboot the VM when things go south.

The video above shows all these concepts coming together to create a simple simulation of a ball bouncing inside a 3D box. The ball has a green point light inside that moves around with it. This is mostly to show that the simulation is still running on the engine’s modern deferred renderer ;)

The Scripting Model

Key to the scripting model is the ability to talk to the engine from a loaded script and find objects in the scene. This allows the user to visually create worlds in the Vortex Editor and then find the important entities from scripts.

Scripts can also create their own entities of course, but for this example, we just wanted to pre-build the world visually.

For the bouncy ball example in the video above, we started off by creating the containing box, the ball object, and the lights in the scene. We used the Editor tools to create all materials and define the look of the entities and lighting.

But once we have our visual scene, how do we script it?

The entry point for scripts running in Vortex is the vtx namespace. Scripts hosted by Vortex automatically get access to a global table with entry points to the engine.

Functions in the vtx namespace are serviced directly from C++. This is a powerful abstraction that allows exposing virtually all engine functionality to a script.

This is exactly what we did. Through the vtx namespace, the bouncy_ball.lua script easily finds the ball, the walls, and the light. Once we have these objects we can get their transforms and register a function that will update them every frame.

Running Scripts

Once our script is ready, we can bring it into the scene directly from within the Editor.

Currently, loading any script will execute it. This runs all code at the file scope inside it. It’s important that scripts that want to respond to engine events register their callbacks at this point.

In order to run every frame, we are interested in the on_frame event inside the vtx.callbacks table. This table behaves essentially like a list. Once every frame, the engine will walk this list and call all functions registered there.
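In script code, registration then amounts to appending a function to that list. A minimal sketch, assuming vtx.callbacks.on_frame is a plain Lua list as described:

```lua
-- Assumed shape: vtx.callbacks.on_frame is a list of functions the
-- engine walks once per frame.
local function my_update(dt)
  -- per-frame logic goes here
end

table.insert(vtx.callbacks.on_frame, my_update)
```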

Pausing and Testing

Since the runloop is controlled directly by the engine, this gives the Editor enormous control over script execution. In particular, we can use the Editor to pause and even step scripts!

Coupled with the Editor’s Lua REPL console, this gives the user a lot of control. Through the Editor UI, the user can stop the scripts and inspect and change any Lua object in realtime. No need to recompile the Editor or reload the scene or scripts.

Show me the Code!

Ok, we covered a lot of ground above. To help the concepts settle in, below is a sketch of the bouncy_ball.lua script used to build the simulation shown above. The main points of interest are the main and on_frame functions.
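A sketch under the constraints described below (vtx.find_entity, get_transform, and get_position/set_position are assumed names, and the bounce is simplified to the vertical axis):

```lua
-- bouncy_ball.lua (sketch). The vtx accessors used here are
-- illustrative names; the real API may differ.

local ball_transform
local light_transform
local velocity = 0.0

local GRAVITY = -9.8   -- world units per second squared
local FLOOR_Y = 0.5    -- where the ball bounces back up

-- Runs once per frame. Positions travel on the Lua stack as plain
-- numbers: no tables are created, so no heap allocations and no GC
-- pressure inside the frame callback.
function on_frame(dt)
  local x, y, z = ball_transform:get_position()

  velocity = velocity + GRAVITY * dt
  y = y + velocity * dt

  if y < FLOOR_Y then
    y = FLOOR_Y
    velocity = -velocity  -- perfectly elastic bounce
  end

  ball_transform:set_position(x, y, z)
  light_transform:set_position(x, y, z)  -- the green light rides along
end

-- Runs at file scope as soon as the script is loaded: finds the
-- important entities and registers on_frame with the runloop.
function main()
  local ball  = vtx.find_entity("ball")
  local light = vtx.find_entity("green_light")

  ball_transform  = ball:get_transform()
  light_transform = light:get_transform()

  table.insert(vtx.callbacks.on_frame, on_frame)
end

main()
```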

The main function is responsible for finding all important entities in the scene and initializing the simulation. As mentioned before, it is run as soon as the script is loaded into the engine. Notice how the main function adds the on_frame function to the runloop.

The on_frame function runs every frame. It receives a time scale that can be used to implement a framerate-independent simulation.

It is worth noting that nothing in the on_frame function allocates memory. In particular, position components are passed into and pulled out of the engine on the Lua stack, with no heap allocations. This is important: Lua has a garbage-collected runtime, and we want to avoid collection pauses during the simulation.

Conclusion

It’s been a lot of fun exploring hosting a scripting language inside the engine and manually building the binding between it and C++.

I think that defining the visual appearance of the scene in the Editor and then letting scripts find entities at runtime was the right decision at this time. It’s a simple model that solves the problem elegantly and can be performant if you cache the objects you access often.

I am going to continue working on the binding and see how far it can go. It’s a good break from working on the renderer all the time ;)

I’m definitely interested in your thoughts! Please share below and, as usual, stay tuned for more!

Vortex Engine V3 and Editor Vertical Slice Complete!

Just a few days ahead of the 2018 SIGGRAPH conference, the last few pieces came into place to allow a complete experience from Editor to Runtime.

Vortex Editor and Runtime. Using a .vtx archive, 3D worlds built in the Editor can now be packaged with their resources and run on the Engine. In this image, the sponza scene, as set up in the Editor, is running on the Vortex Runtime for iOS.

When we set out to revamp the engine and build an editor for Vortex, we wanted to give users of the engine a way to intuitively assemble and tweak 3D worlds and then run them on the engine without the fuss of having to rebuild the app every time.

The Vortex Editor moved us in that direction, allowing users to build their world visually, point and click. The final missing piece was the ability to build a self-contained package that wraps all the project resources and can be distributed.

Enter the Vortex Archive (.vtx) files.

Vortex Archive files wrap all the resources necessary for the engine to load your created 3D world and run it. With this in place, we now have a full end-to-end experience where a world can be authored completely in the Vortex Editor, then packaged into a Vortex Archive, and ultimately run on any of the supported platforms.

Vortex Archive Format (.vtx)

In order to package the scene manifest and all referenced resources, I ended up designing a custom binary file format for the Archive. I used the extension .vtx, although I originally wanted to call these .var files (after Vortex ARchive). However, /var is a standard directory on UNIX systems (home to variable and temporary files), so I didn’t want to clash with that convention.

The format in its initial version is pretty simple to read and write. The following table shows how the resources are stored inside the archive.

Size (bytes)  Contents

Archive Header
8             Archive Version (currently 1.0)
8             Number of Entries

Contents (one entry per resource, repeated "Number of Entries" times)
8             Resource Path String Length
8             Sub-Resource Identifier String Length
8             Data Lump Size
varies        Resource Path
varies        Sub-Resource Identifier
varies        Raw File Data

The contents section contains all the resources one after the other. The total number of stored resources is given by the archive header, under the “Number of Entries” field.

I could’ve added a “magic” number at the beginning, but all in all, this is a very simple format that binds everything together.
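To make the layout concrete, here is a minimal reader sketch in Lua 5.3+. The little-endian byte order, the unsigned 8-byte fields, and the encoding of the version field are assumptions; the engine's real reader is C++.

```lua
-- Hedged .vtx reader sketch (Lua 5.3+, uses string.unpack).
local function read_vtx(path)
  local f = assert(io.open(path, "rb"))
  local data = f:read("a")
  f:close()

  local pos = 1
  -- Archive header: version (encoding assumed) and entry count.
  local version, count
  version, count, pos = string.unpack("<I8 I8", data, pos)

  local entries = {}
  for _ = 1, count do
    local path_len, sub_len, lump_size
    path_len, sub_len, lump_size, pos = string.unpack("<I8 I8 I8", data, pos)

    local res_path = data:sub(pos, pos + path_len - 1);  pos = pos + path_len
    local sub_id   = data:sub(pos, pos + sub_len - 1);   pos = pos + sub_len
    local lump     = data:sub(pos, pos + lump_size - 1); pos = pos + lump_size

    entries[#entries + 1] = { path = res_path, sub = sub_id, data = lump }
  end
  return version, entries
end
```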

A Note on Compression

As part of the format definition process, I studied compressing each individual resource using zip. Ultimately, I discarded the idea.

Although zip compression would be beneficial for text resources (such as the scene manifest), at this time the vast majority of resources stored are (already) compressed image files. These are not expected to deflate significantly, so I couldn’t justify the increase in complexity at this time.

I might revisit this in the future as we expand scripting support and provide the ability to write custom shaders.

Vortex Runtime

With a complete workflow from Editor to Engine, it’s now possible to build a 3D world entirely on desktop and deploy it on any of the supported platforms: Windows, Mac, Linux-based OSes, and mobile devices.

Now, as easy as it is to add the engine to a project, there might be cases where we don’t want to write an app just to be able to run a 3D world. Cases where all we need is a thin layer that loads and runs our archive. For these cases, we’ve decided to add a third project into the mix: the Vortex Runtime.

The Runtime is a small frontend app to the engine that can load a Vortex Archive and play it. It’s a minimal, platform-specific app that wraps the underlying intricacies and provides a consistent interface on which Vortex Archives can be run.

Runtimes can be developed for any platform where the engine is available, enabling authored 3D worlds to be deployed virtually anywhere. An advanced user will probably still want to use C++ in order to stay in control, but for building simple playgrounds in Lua, the Runtime might be all that you need.

I think this is a powerful concept, and a fun one to explore: determining how much of the engine we can expose through the scripting API before users have to dip into native code.

Conclusion

It’s been a long ride since we initially set out to build a vertical slice of the Editor and revamp the Engine, but it has been worth it.

With these latest additions, we’ve now got a complete tool that we can grow horizontally. It is a starting point we can use to study the implementation of new rendering techniques, as well as further explore tech related to simulation, physics, compilers, scripting APIs and native platforms.

This moment is the culmination of a lot of hard work (during my free time), but it’s not the end. It is the beginning. Stay tuned for more to come!

Deferred Realtime Point Lights

It has been a while since my last update, but I’m excited to share significant progress on the new renderer: as of a couple of weeks ago, it supports realtime deferred point lights!

Point Lights in Vortex Engine 3.0’s Deferred Renderer. Sponza scene Copyright (C) Crytek.

Point lights in a deferred renderer are a bit more complicated to implement than directional lights. For directional lights, we can usually get away with drawing a fullscreen quad to compute the light’s contribution to the scene. With point lights, we need to render a light volume for each light and compute the light contribution only for the meshes it intersects.

The following image is from one of the earlier tests I was conducting while implementing the lights. Here, I decided to render the light volumes as wireframe meshes for debugging purposes.

Deferred Point Lights with their Volumes rendered as a wireframe.

If you look closely, you can see how each light is contained to a sphere and only contributes to the portions of the scene it is intersecting. This is the great advantage of a deferred renderer when compared to a traditional forward renderer.

In a forward renderer, we would have to draw the entire scene for each light. Only at the very end of the pipeline would we find out that a point light contributed nothing to a fragment; by then, we would have already performed all the work in the fragment shader. A deferred renderer, in contrast, only shades the subsection of the screen affected by each light volume. This allows for very large numbers of realtime lights in a scene, with the total cost of lots of small lights amounting to roughly that of one big fullscreen light.

Determining Light Intersections

One problem that arises when rendering point light volumes is determining the intersection with the scene geometry. There are different ways of solving this problem. I decided to base my approach on this classic presentation by NVIDIA.

Light Volume Stencil Testing. We use the stencil buffer to determine which fragments are at the intersection of the light volume with a mesh.

The idea is to use the stencil buffer to cleverly test the light volumes against the z-buffer. In order for this to work, I had to do a pre-pass, rendering the back faces of the light volumes. During this pass, we update the stencil value only on z-fail. Z-fail means that we can’t see the back of our light volume because another mesh is there – exactly the intersection we’re looking for!

Once the stencil buffer pass is complete, we do a second pass of the light volumes, this time with the stencil test set to match the reference value (and z-testing disabled). The fragments where the test passes are lit by the light.

The image above shows the idea. In it, you can see how the light volume determines the fragments that the point light is affecting.

Screenshots

Here are some more screenshots of the technique.

In the following image, only the lion head has a bump map; the rest of the meshes just use the geometric normal. Even as I was building this system, I was in awe at the interaction of normal mapping with the deferred point lights. Take a look at the lion head (zoom in for more detail): the results are astounding.

Vortex Engine 3.0 – Deferred Point Lights interacting with normal mapped and non-normal mapped surfaces.

Here’s our old friend, the test cube, being lit by 3 RGB point lights.

Vortex Engine 3.0 – Our trusty old friend, the test cube, being lit by 3 realtime deferred point lights.

I’m still playing with the overall light intensity scale (i.e. what does “full intensity” mean?). Lights are pretty dim in the Sponza scene, so I might bring them up across the board to be more like in the cube image.

Conclusion

Deferred rendering is definitely an interesting technique that brings a lot to the table. In recent years it has been superseded by more modern techniques like Forward+, but the results are undeniable – especially when combined with elaborate shading techniques such as normal mapping.

The next steps will be to add spot light support and start implementing post-processing techniques.

Stay tuned for more!

Multiple Directional Realtime Lights

This week work went into supporting multiple directional realtime lights in the new Vortex V3 renderer.

Multiple Realtime Lights rendered by Vortex V3.

In the image above, we have a scene composed of several meshes, each with its own material, being affected by three directional lights. The lights have different directions and colors and the final image is a composition of all the color contributions coming from each light.

In order to make the most out of this functionality in the engine, I revamped the Light Component Inspector. It’s now possible to set the direction and color through the UI and see the results affect the scene immediately. You can see the new UI in the screenshot above.

Now, since lights are entities, I considered reusing the entity’s rotation to rotate a predefined vector and thus define the light direction. In the end, however, I decided against it. The main reason is that it is clearer to explicitly set the direction vector in the UI than to make the user do angle math in their head to figure out an obscure internal vector. This way, you specify the vector directly.

I’m pretty happy with the results. Internally, each light is computed individually and all contributions are additive-blended onto the framebuffer. This means the cost of rendering n objects affected by m lights is n + m draw calls (for example, 10 objects and 3 lights cost 13 draw calls). This is a big advantage over the forward rendering equivalent, which would require at least n * m draw calls (30 in the same example).

Notably missing from the image above is color bleed. Photorealism is addictive: the closer you approximate real life, the easier it is to tell an image is synthetic when something is missing. That will be a topic for another time, however.

Next week I want to make some additions to the material system to make it more powerful, as well as start implementing omnidirectional lights.

Stay tuned for more!

Procedural Sphere Primitive and Specular Highlights

This week I spent some time adding specular highlights to the Deferred Renderer. The results can be seen below.

A procedural sphere with specular highlights. Notice the dramatic difference that normal mapping makes when computing the highlights.

Since the box was not the best primitive to test specular highlights, I added a sphere primitive to the engine. This primitive is also going to come in handy when we start adding point lights.

The sphere primitive includes normal and texture coordinate data, enabling shading from both the regular deferred light pass and the normal mapping pass. Both passes are shown in the image above. The difference is dramatic. Notice how normal mapping helps convey the illusion of a rough brick surface, whereas the “geometrical” normal just makes it look flat.

Please don’t mind the glitches in the GIF above. Overriding the mouse cursor makes the Screen To Gif tool I use act up a little bit. I promise these are not Editor bugs ;)

Custom Computer @ SIGGRAPH

I’m back from SIGGRAPH 2017 and I wanted to share with you this amazing custom computer I saw at the expo.

A computer made out of a Tegra K1 SoC, a touch screen and a custom-built enclosure.

This computer consists of an NVIDIA Tegra K1 System-on-Chip (SoC) with a touch screen connected. It is housed in a custom-built enclosure that somewhat resembles an iMac and other “all in one” PCs out there.

What I find fascinating is how, using off-the-shelf components, one can build an “appliance” that showcases 3D tech and/or art for a booth.

In this case, the machine is demoing a PBR renderer built for the Qt framework. Of course, I can envision this as an interesting showcase for the Vortex Engine as well. Because the renderer can scale from a dedicated desktop GPU all the way down to an iPhone, and it already supports Linux-based systems, it would be easy to build something similar. This would also give me the opportunity to play with some hardware in my free time : )

I’m excited about this project. I will start looking into it once the renderer is more mature.

Normal Mapping 2.0

This week we switched gears back into Rendering! A lot of work went into building Normal Mapping in the new Renderer. The following image shows the dramatic results:

Normal mapping in the new Deferred Renderer.

Here, I switch back and forth between the regular Geometry Pass shader and a Normal Mapping-aware shader. Notice how Normal mapping dramatically changes the appearance of the bricks, making them feel less part of a flat surface and more like a real, coarse surface.

I initially discussed Normal Mapping back in 2014, so I definitely recommend you check out that post for more details on how the technique works. The biggest difference in building Normal Mapping in Vortex V3 compared to Vortex 2.0 was implementing it on top of the new Deferred Renderer.

There is more work to be done in terms of Normal Mapping, such as adding specular mapping, but I’m happy with the results so far. Next week we continue working on graphics! Stay tuned for more!

Putting it all together

This week has been a big one in terms of wrangling together several big pillars of the engine to provide wider functionality. The image below shows how we can now dynamically run an external Lua script that modifies the 3D world on the fly:

Vortex loading and running an external script that changes the texture of an entity’s material.

In the image above, I’ve created two boxes. Both of these have different materials and each material references a different texture.

In the clip, I “mistakenly” drag a character texture from the Asset Library and assign it as the second box’s texture. Oh no! How can we fix this? Easy: just run an external script that assigns the first box’s texture to the second!

The script goes along these lines (a sketch; the entity names and the material/texture accessors shown are illustrative):
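```lua
-- Sketch of the fix-up script; entity names ("box1", "box2") and the
-- material/texture accessor names are illustrative assumptions.
local box1 = vtx.find_entity("box1")
local box2 = vtx.find_entity("box2")

-- Drill down to each box's material...
local mat1 = box1:get_material()
local mat2 = box2:get_material()

-- ...and assign the first box's texture to the second. Every call
-- here crosses into C++, so the scene view updates immediately.
mat2:set_texture(mat1:get_texture())
```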

As you can see, the script is pretty straightforward. It finds the boxes, drills all the way down to their materials and then assigns the texture of the first box to the second. The changes are immediately seen in the 3D world.

It’s worth noting that all function calls into the vtx namespace and derived objects actually jump into C++. The script is therefore dynamically manipulating engine objects, which is why we see its effects in the scene view.

The function names are still a work in progress, and admittedly, I need to write more scripts to see whether they feel comfortable or whether they’re too long and therefore hard to remember. My idea is to make the scripting interface as simple to use as possible, so if you have any suggestions, I would love to hear your feedback! Feel free to leave a comment below!

Next week I will continue working on adding more functionality to the scripting API, as well as adding more features to the renderer! Stay tuned for more!