Excellent Intro to Qt Quick

I came across this video that provides a great introduction to the Qt Quick controls in Qt 5.

It’s very interesting to see how a fully fledged, cross-platform app that consumes a Web API can be developed in just over 15 minutes with barely a single line of code.

After seeing this video, I’ve been looking a little more into how C++ can be integrated into Qt Quick apps and, unfortunately, the integration doesn’t seem to leverage the signal-slot mechanism the way QWidgets applications do.

This is a problem, since it means that reusing a large existing codebase might be more involved than seamlessly transitioning from writing QWidgets to slowly rolling out side-by-side Qt Quick panels.
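
As a reference for what that integration typically looks like, here is a minimal sketch of one common approach: exposing a C++ object to QML through a context property. The Backend class, its fetch() method and the “backend” context name are made up for illustration; QQmlApplicationEngine and setContextProperty are standard Qt APIs.

// main.cpp -- minimal sketch: exposing a C++ object to a Qt Quick UI.
// "Backend", its fetch() method and the "backend" name are hypothetical.
// Assumes the file is named main.cpp and built with qmake/CMake automoc.
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQmlContext>
#include <QObject>
#include <QString>
#include <QUrl>

class Backend : public QObject {
    Q_OBJECT
public:
    // Callable directly from QML, e.g. onClicked: backend.fetch("...")
    Q_INVOKABLE QString fetch(const QString& url) {
        // ... call into the existing C++ codebase here ...
        return QStringLiteral("result for ") + url;
    }
};

int main(int argc, char* argv[]) {
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;

    Backend backend;
    // Expose the object to QML under the name "backend".
    engine.rootContext()->setContextProperty("backend", &backend);

    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}

#include "main.moc"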

Nonetheless, it’s very impressive and it’s definitely worth taking a look at if you need to develop a quick desktop UI in 10~15 minutes.

Implementing a Waypoint System in the Vortex Engine

This week, we’re back to developing new native components for the Vortex Engine. The objective this time was to develop a “Waypoint Tween” component that moves an entity’s position along a series of points.

The new Waypoint Tween Component is used to move a 3D model between four points.

There are two main aspects to bringing the system to life: the component implementation in the Vortex Engine and the UI implementation in the Vortex Editor.

At the engine level, the system is implemented as a C++ component that very efficiently performs the math necessary to interpolate the entity’s position between points based on time and speed.

At the editor level, due to the flexibility of this system, exposing its properties actually required a significant amount of UI work. In this first iteration, points can be specified directly in the component properties of the inspector panel. Further down the road, the plan is to allow the user to specify the points as actual entities in the world and then reference them.
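
To give an idea of the math the engine-level component performs every frame, here is a minimal, self-contained sketch of interpolating a position along a list of waypoints based on speed and elapsed time. All the names and the data layout are illustrative; the actual Vortex component is structured differently, but the core idea is the same.

// waypoint_tween_sketch.cpp -- illustrative only; not the actual Vortex API.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

static float distance(const Vec3& a, const Vec3& b) {
    float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct WaypointTween {
    std::vector<Vec3> points;   // waypoints to visit, in order (looping)
    float speed = 1.0f;         // world units per second
    std::size_t current = 0;    // index of the segment we are on
    float traveled = 0.0f;      // distance covered on the current segment

    // Advance by dt seconds and return the interpolated position.
    Vec3 update(float dt) {
        const Vec3& from = points[current];
        const Vec3& to = points[(current + 1) % points.size()];
        float segment = distance(from, to);

        traveled += speed * dt;
        if (traveled >= segment) {            // reached the next waypoint
            traveled -= segment;
            current = (current + 1) % points.size();
            return update(0.0f);              // re-evaluate on the new segment
        }
        return lerp(from, to, traveled / segment);
    }
};

int main() {
    WaypointTween tween;
    tween.points = { {0, 0, 0}, {4, 0, 0}, {4, 0, 4}, {0, 0, 4} };
    tween.speed = 2.0f;
    for (int frame = 0; frame < 5; ++frame) {
        Vec3 p = tween.update(1.0f / 60.0f);  // one 60 FPS frame
        std::printf("%.3f %.3f %.3f\n", p.x, p.y, p.z);
    }
}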

Animation and Movement

Now, in the animated GIF above, it can be seen that the 3D model is not only moving between the specified points, but also appears to be running between them.

There are two factors at play here to implement this effect: the MD2 Animation and the Waypoint Tween.

The MD2 Animation and Waypoint Tween Components.

When enabled, the Animate Orientation property of the Waypoint Tween component orients the 3D model so that it faces the point it’s moving towards.

This property is optional, as there are cases where the behavior could be undesirable. Imagine, for instance, a conveyor belt that moves boxes along its top: it would look odd if the boxes magically rotated about their Y axis. For a character, on the other hand, it makes complete sense for the model to be oriented towards the point it’s moving to.
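
For the curious, the core of an “animate orientation” behavior can be boiled down to deriving a yaw angle from the direction of travel. The following is just a rough illustration of the idea, not the actual Vortex code:

// face_direction_sketch.cpp -- illustrative only.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Yaw (rotation about the vertical Y axis, in radians) that makes a model
// whose "forward" is +Z look along the horizontal direction of travel.
static float yawTowards(const Vec3& from, const Vec3& to) {
    float dx = to.x - from.x;
    float dz = to.z - from.z;
    return std::atan2(dx, dz);
}

int main() {
    Vec3 position{0.0f, 0.0f, 0.0f};
    Vec3 nextWaypoint{3.0f, 0.0f, 3.0f};
    float yaw = yawTowards(position, nextWaypoint);
    std::printf("face the waypoint with yaw = %.1f degrees\n",
                yaw * 180.0f / 3.14159265f);   // 45 degrees here
}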

Regarding the run animation, if you have been following our series on the Vortex Editor, you will remember that when instantiated by the engine, MD2 Models automatically include an MD2 Animation Component that handles everything related to animating the entity.

More details can be found in the post where we describe how MD2 support is implemented, but the idea is that we set the animation to the looping “run” sequence.

When we put it all together, we get an MD2 model that is playing a run animation as it patrols between different waypoints in the 3D world.

Waypoint System in Practice

So how can the waypoint system be used in practice? I envision two main uses.

The first one is for environment building. Under this scenario, the component system is used to animate objects in the background of the scene. Case in point, the conveyor belt system described above.

The second use, which might be a little more involved, would be to offload work from scripts. The efficient C++ implementation of the waypoint system would allow a component developed in a scripting language to have an entity move between different points without having to do the math calculations itself.

The dynamic nature of the component would allow this script to add and remove points, as well as interrupt the system at any time to perform other tasks. An example would be a monster that uses the waypoint system to patrol an area of the scene and then, when a player is detected nearby, the system is interrupted and a different system takes over, perhaps to attack the player.

In closing

I had a lot of fun implementing this system, as it brings a lot of options to the table in terms of visually building animated worlds for the Vortex Engine.

The plan for next week is to continue working on the Editor. There is some technical debt on the UI I want to address in order to improve the experience and there are also a couple of extra components I want to implement before moving on to other tasks.

As usual, stay tuned for more!

OpenGL from a 10,000ft view

This month marks 10 years since I started learning and using OpenGL, and what a ride it has been! I started off with basic OpenGL 1.1 back in university under the guidance of my mentor and ex-Googler Gabriel Gambetta, then moved on to the programmable pipeline by teaching myself how to code shaders, and then rode the wave of the mobile revolution with OpenGL ES on the iPhone.

OpenGL Logo. Copyright (C) Khronos Group.

As part of this process, I’ve also had the privilege of teaching OpenGL to others at one of the most important private universities back home. This pushed me to learn ever more about the API and improve my skills.

Rather than doing a retrospective post to commemorate the date, I thought about doing something different. In this post I’m going to explain how OpenGL works from a 10,000ft view. I will lay out the main concepts of how vertex and triangle data gets converted into pixels on the screen and, in the process, explain how a video card works. Let’s get started!

What is OpenGL

At the most basic level, OpenGL can be seen as a C API that allows a program to talk to the video driver and request operations or commands to be performed on the system’s video card.

Titan X GPU by NVIDIA. Image courtesy of TechPowerup.com

So what is a video card? A video card (or GPU) is a special-purpose parallel computer, excellent at executing a list of instructions on multiple data at the same time. A video card has its own processors, its own memory and it’s good at performing one particular set of operations (namely, linear algebra) very very fast.

What OpenGL gives us is access to this device through a client/server metaphor where our program is the client that “uploads” commands and data to the video driver. The video driver, which is the server in this metaphor, buffers this data and, when the time is right, it will send it to the video card to execute it.

Our program’s “instance” inside the video driver is known as the OpenGL Context. The Context is our program’s counterpart in the video card and it holds all the data we’ve uploaded (including compiled shader programs) as well as a large set of global variables that control the graphics pipeline configuration. These variables comprise the OpenGL State and they’re the reason OpenGL is usually seen as a State Machine.
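
To make the state machine idea concrete: any setting we change on the Context stays in effect for every subsequent draw call until we explicitly change it again. A tiny illustrative snippet (assuming a valid Context is already current; on macOS the header is <OpenGL/gl.h> instead):

// gl_state_sketch.cpp -- assumes a valid OpenGL Context is current.
#include <GL/gl.h>

void configurePipelineState() {
    // These calls don't draw anything; they mutate the Context's state.
    glEnable(GL_DEPTH_TEST);                            // keep closest pixels
    glEnable(GL_BLEND);                                 // enable alpha blending
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard "over" blend
    glClearColor(0.2f, 0.2f, 0.2f, 1.0f);               // used by glClear()
    // Every draw call issued from now on is affected by this state,
    // until some other call changes it.
}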

The Graphics Pipeline (Simplified)

Remember how I mentioned that the video card excels at performing a limited set of operations very very fast? Well, what are these operations?

The way the video card works is that data is fed into it through OpenGL and then it goes through a series of fixed steps that operate on it to generate, ultimately, a series of pixels. It is the job of the video card to determine which pixels should be painted and using which color. And that’s really all the video card does at the end of the day: paint pixels with colors.

The following image, taken from Wikipedia, shows a simplified view of the data processing pipeline that OpenGL defines.

A simplified view of the OpenGL pipeline. Source: Wikipedia

In this image, imagine the data coming in from the user program; the diagram shows what happens to it inside the video card.

Here:

  1. Per-Vertex Operations: are operations that are applied to the vertex data supplied. This data can be coming in hot from main system memory or be already stored in video card memory. Here is where vertices are transformed from the format they were originally specified in into something we can draw on the screen. The most common scenario here is to take a piece of geometry from Object Space (the coordinate system the artist used), place it in front of a virtual “camera”, apply a perspective projection and adjust its coordinates. In the old days, here is where all transformation and lighting would take place. Nowadays, when a shader program is active, this stage is implemented by the user’s Vertex Shader.
  2. Primitive Assembly: here’s where OpenGL actually assembles whatever vertex data we supplied into its basic primitives (Triangle, Triangle Strip, Triangle Fan, Lines or Points, among others). This is important for the next step.
  3. Rasterization: is the process of taking a primitive and generating discrete pixels. It amounts to, given a shape, determining what pixels said shape covers. If the conditions are right, texture memory can be sampled here to speed up the texturing process.
  4. Per-Fragment Operations: are operations performed on would-be pixels. If there is a shader program active, this is implemented by the user in the fragment shader. Texture mapping operations take place here, as well as (usually) shading and any other operations that the user can control. After this stage, a number of operations take place based on the State Machine. These operations include depth testing, alpha testing and blend operations.
  5. Framebuffer: finally, this is the image we are rendering our pixels to. It is normally the screen, but we can also define a texture or a Render Target object that we could then sample to implement more complex effects. Shadow Mapping is a great example of this.

Sample OpenGL Program

Having taken a (very quick) look at OpenGL, let’s see what a simple OpenGL program might look like.

We are going to draw a colored triangle on the screen using a very simple script that shows the basic interaction between our program and the video card through OpenGL.

A colored triangle drawn by a simple program exercising the OpenGL API.

I’m using Python because I find that its super simple syntax helps put the focus on OpenGL. OpenGL is a C API, however, and in production code we tend to use C or C++ when working with it. There are other bindings available for Java and C# as well but, mind you, these just marshal the calls into C and invoke the API directly.

This script can be divided into roughly three parts: initializing the window and OpenGL context, declaring the data to feed to the video card, and a simple event loop. Don’t worry, I’ll break it down in the next section.

#!/opt/local/bin/python2.6
import pygame
from OpenGL.GL import *

def main():
	# Boilerplate code to get a window with a valid
	# OpenGL Context
	w, h = 600, 600
	pygame.init()
	pygame.display.set_caption("Simple OpenGL Example")
	scr = pygame.display.set_mode((w,h), pygame.OPENGL|pygame.DOUBLEBUF)
	
	glClearColor(0.2, 0.2, 0.2, 0.0)

	# Data that we are going to feed to the video card:
	vertices = [ \
		-1.0, -1.0, 0.0, \
		1.0, -1.0, 0.0,  \
		0.0, 1.0, 0.0 ]

	colors = [ \
		1.0, 0.0, 0.0, 1.0, \
		0.0, 1.0, 0.0, 1.0, \
		0.0, 0.0, 1.0, 1.0 ]
			

	# Here's the game loop, all our program does is
	# draw to a buffer, then show that buffer to the
	# user and read her input.
	done = False
	while not done:
		# Clear the framebuffer
		glClear(GL_COLOR_BUFFER_BIT)
		
		# Supply the video driver a pointer to our
		# data to be drawn:

		glVertexPointer(3, GL_FLOAT, 0, vertices)
		glEnableClientState(GL_VERTEX_ARRAY)

		glColorPointer(4, GL_FLOAT, 0, colors)
		glEnableClientState(GL_COLOR_ARRAY)

		# Now that all data has been set, we tell
		# OpenGL to draw it, and which primitive
		# our data describes. This will be used
		# at the primitive assembly stage.
		glDrawArrays(GL_TRIANGLES, 0, 3)

		# Clean up
		glDisableClientState(GL_COLOR_ARRAY)
		glDisableClientState(GL_VERTEX_ARRAY)
		
		# Show the framebuffer
		pygame.display.flip()

		# Process input:
		for evt in pygame.event.get():
			if evt.type == pygame.QUIT:
				done = True

if __name__ == "__main__":
	main()

If you’re familiar with OpenGL, you’ll notice I’m using mostly OpenGL 1.1 here. I find it’s a simple way to show the basic idea of how data is fed into the video card. Production-grade OpenGL will no doubt prefer to buffer data on the GPU and leverage shaders and other advanced rendering techniques to efficiently render a scene composed of thousands of triangles.

Also note that the data is in Python list objects and, therefore, the PyOpenGL binding is doing a lot of work behind the scenes to convert it into the float arrays we need to supply to the video card.

In production code we would never do this; however, doing anything more efficient would require fiddling with pointer syntax that would undoubtedly make the code harder to read.
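
For contrast, here is a rough sketch of what the “production” path looks like in C/C++: the vertex data is uploaded once into a buffer object that lives in video card memory, and subsequent draw calls reference that buffer instead of re-sending the arrays every frame. This assumes a current OpenGL 2.0+ context and a bound shader program whose position attribute is at location 0; it is only an illustration of the concept.

// vbo_sketch.cpp -- illustrative; assumes a current GL 2.0+ context and a
// bound shader program whose position attribute is at location 0. On some
// platforms these entry points need an extension loader such as GLEW.
#include <GL/gl.h>

static const GLfloat vertices[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};

GLuint uploadTriangle() {
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);                       // ask the driver for a buffer name
    glBindBuffer(GL_ARRAY_BUFFER, vbo);          // make it the active array buffer
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices,
                 GL_STATIC_DRAW);                // upload once, into GPU memory
    return vbo;
}

void drawTriangle(GLuint vbo) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);                // attribute 0 = position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void*)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);            // data already lives on the GPU
    glDisableVertexAttribArray(0);
}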

Putting it all together

Now, if you’re unfamiliar with OpenGL code, let’s see how our program is handled by the Graphics Pipeline.

		# Supply the video driver a pointer to our
		# data to be drawn:

		glVertexPointer(3, GL_FLOAT, 0, vertices)
		glEnableClientState(GL_VERTEX_ARRAY)

		glColorPointer(4, GL_FLOAT, 0, colors)
		glEnableClientState(GL_COLOR_ARRAY)

We start off by providing an array of vertices and colors to OpenGL, as well as a description of how this data is to be interpreted. The calls to glVertexPointer and glColorPointer (in real life you would use glVertexAttribPointer instead) tell OpenGL how our numbers are to be interpreted. In the case of the vertex array, we say that each vertex is composed of 3 floats.

glEnableClientState is a function that tells OpenGL that it’s safe to read from the supplied array at the time of drawing.

		# Now that all data has been set, we tell
		# OpenGL to draw it, and which primitive
		# our data describes. This will be used
		# at the primitive assembly stage.
		glDrawArrays(GL_TRIANGLES, 0, 3)

glDrawArrays is the actual function that tells OpenGL to draw, and what to draw. In this case, we are telling it to draw triangles out of the data we’ve supplied.

After this call, vertex data will go through the per-vertex operations stage and then be handed off to the primitive assembly, which will effectively interpret the vertices as forming part of one (or more) triangles.

Next, the rasterization stage will determine which pixels of the framebuffer would be covered by our triangle and emit these pixels, which will then go to the per-fragment operations stage. The rasterization stage is also responsible for interpolating vertex data over the triangle; this is why we get a color gradient spanning the area of the triangle – it’s simply the interpolation of the colors at the three vertices.
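
To make that interpolation concrete, here is a tiny CPU-side illustration of what the rasterizer conceptually does for each covered pixel: it computes how close the pixel is to each of the three vertices (the barycentric weights) and blends the vertex colors accordingly. The real hardware implementation is, of course, far more sophisticated.

// barycentric_sketch.cpp -- a CPU-side illustration of color interpolation.
#include <cstdio>

struct Vec2 { float x, y; };
struct Color { float r, g, b; };

// Signed-area helper used to compute barycentric weights.
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Color of point p inside triangle (a, b, c) with per-vertex colors.
static Color shade(Vec2 a, Vec2 b, Vec2 c, Color ca, Color cb, Color cc, Vec2 p) {
    float area = edge(a, b, c);
    float wa = edge(b, c, p) / area;   // weight of vertex a
    float wb = edge(c, a, p) / area;   // weight of vertex b
    float wc = edge(a, b, p) / area;   // weight of vertex c
    return { wa * ca.r + wb * cb.r + wc * cc.r,
             wa * ca.g + wb * cb.g + wc * cc.g,
             wa * ca.b + wb * cb.b + wc * cc.b };
}

int main() {
    // Same triangle and colors as the script above, in 2D for simplicity.
    Vec2 a{-1, -1}, b{1, -1}, c{0, 1};
    Color red{1, 0, 0}, green{0, 1, 0}, blue{0, 0, 1};
    Color center = shade(a, b, c, red, green, blue, Vec2{0.0f, -0.333f});
    std::printf("color near the center: %.2f %.2f %.2f\n",
                center.r, center.g, center.b);  // roughly equal parts of each
}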

All of this happens inside the video card, in parallel with our event loop, which is why the script itself contains no code for these stages.

		# Show the framebuffer
		pygame.display.flip()

Finally, after everything is said and done, the video card writes the resulting pixels on the framebuffer, and we then make it visible to the user by flipping the buffers.

In Closing and Future Thoughts

We’ve barely scratched the surface of what OpenGL is and can do. OpenGL is a big API that has been around for 20+ years and has been adding lots of new features as video card and video game companies continue to push for ever more realistic graphics.

Now, while 20+ years of backwards compatibility allow running old code almost unmodified on modern systems, design decisions accumulated over time tend to obscure the optimal path to performance, as well as impose restrictions on applications that would benefit from more direct control of the video card.

Vulkan logo. ™ Khronos Group.

These points, made by the Khronos Group itself, have led to the design and development of a new graphics API standard called Vulkan. Vulkan is a break from the past that provides a slimmed-down API more suitable for modern-day hardware, in particular for multi-threaded and mobile applications.

OpenGL, however, is not going away any time soon, and the Khronos Group’s plan, at least for the time being, appears to be to offer both APIs side by side and let developers choose the one better suited to the problem at hand.

Additionally, with Apple focusing on Metal and Microsoft on DX12, OpenGL (in particular OpenGL ES 2.0) remains the only truly cross-platform API that can target almost every relevant device on the planet, be it an iPhone, an Android phone, a Windows PC, GNU/Linux or Mac.

Finally, the large body of knowledge surrounding 20+ years of OpenGL being around, coupled with OpenGL’s relative “simplicity” when compared to a lower-level API such as Vulkan, may make it a more interesting candidate for students learning their first hardware-accelerated 3D API.

As time marches on, OpenGL remains a strong contender, capable of pushing anything from AAA games (like Doom) to modern-day mobile 3D graphics and everything in-between. It is an API that has stood the test of time, and will continue to do so for many years to come.

Building a Vortex Engine scene using Vortex Editor

This week I’ve been working on several different UX enhancements for the Editor to improve the scene building experience in general.

Building a scene for the Vortex Engine using the Vortex Editor.

One thing to notice in the above image is the new horizontal grid on the floor. Each cell on the floor is exactly 1×1 world units in size, which helps tremendously in keeping a sense of scale and distance between objects when building the scene.

What I like about this image is how it shows the different components of the Editor coming together to make the Vortex Engine easier to use than ever. The more complete the Editor is, the quicker we can move on to start implementing the new rendering backend for Vortex, codename “V3”.

Stay tuned for more weekly updates!

Off-the-Shelf Native Components

At the end of last week’s article, we signed off mentioning the possibility of providing two component APIs for the Entity Component System of the Vortex Editor. Although I’m still very much in line with this idea, the more I lean towards a scripting language, the more concerned I am about performance.

I am worried that scripted components might not be able to meet the quota of updating in less than 16.6 ms once we have a few of them in any given scene. 16.6 ms (1000 ms / 60 frames) is the time slice we have to update all components and render the scene on the screen in order to achieve 60 FPS.

A native component, on the other hand, could benefit from all the advantages of going through an optimizing compiler and requiring no parameter marshaling when interfacing with Vortex’s C++ API.

In light of this, what I’ve been considering is providing a rich set of native components that are part of the Vortex API and that a script can attach to Entities. This would provide the advantage of being able to dynamically add behavior to, and remove it from, Entities, while still hitting native performance during the update-render loop.

Of course, I’m still very interested in providing a means to create components through scripts, but perhaps these can be used mostly for exploring ideas, and if performance becomes a problem, then the best practice would be to move to a native implementation of the component.

This week, as a dry-run of the Component API, I’ve implemented a simple Rotation Component that will make any Entity spin on its local axes. A custom UI widget in the Vortex Editor allows configuring the speed and rotation axis of the component in realtime, while visualizing the animation in the 3D view.
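
To give a flavor of what such a component looks like, here is a minimal sketch of a spin-on-update component. The Component base class, its onUpdate() hook and the Entity fields shown here are hypothetical stand-ins, not the actual Vortex Engine API:

// rotation_component_sketch.cpp -- illustrative only; the Component base
// class, onUpdate() hook and Entity fields are hypothetical.
#include <cstdio>

struct Vec3 { float x, y, z; };

struct Entity {
    Vec3 rotation{0.0f, 0.0f, 0.0f};   // Euler angles in degrees
};

class Component {
public:
    virtual ~Component() = default;
    virtual void onUpdate(Entity& entity, float dt) = 0;   // dt in seconds
};

class RotationComponent : public Component {
public:
    Vec3 axis{0.0f, 1.0f, 0.0f};       // which local axes to spin on
    float degreesPerSecond = 90.0f;

    void onUpdate(Entity& entity, float dt) override {
        entity.rotation.x += axis.x * degreesPerSecond * dt;
        entity.rotation.y += axis.y * degreesPerSecond * dt;
        entity.rotation.z += axis.z * degreesPerSecond * dt;
    }
};

int main() {
    Entity knight;
    RotationComponent spin;
    spin.degreesPerSecond = 45.0f;
    spin.onUpdate(knight, 1.0f / 60.0f);   // one frame at 60 FPS
    std::printf("yaw after one frame: %.3f degrees\n", knight.rotation.y);
}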

I’m pretty happy with the results, and I believe this is the way to move forward with the Entity Component System of the Engine.

Next week I will continue working on the Editor and Engine, trying to maximize what you can do out of the box. Stay tuned for more!

Entity Components in the Vortex Engine

Last week, we got to a point where we can drag and drop an MD2 file into the 3D view and it automatically instantiates an Entity representing the 3D model. Now, because MD2 models support keyframe animation, suppose we wanted to somehow provide a way for this model to be correctly animated…

An example component driving a keyframe animation for a 3D model.

The Problem

One way to go about adding animations would be to implement the MD2 keyframe logic inside the Entity class, so that I need only send an update message to the Entity and it will update the vertex data it holds.

However, because I can also instantiate Entities from static OBJ files, it doesn’t make much sense for this logic to be part of the Entity. So what could we do then?

Well, again, we could create a sub-class of Entity with the logic that we need for the animation. The problem with this approach is that, sooner rather than later, we will end up with a huge class hierarchy that constantly needs refactoring and intermediate super-classes so that Entities can share some behaviors (but not others) in a way that enables code reuse.

The Entity-Component-System Model

Wouldn’t it be nice if, instead of extending and inheriting code, I could somehow cherry-pick the properties and functionality that I want for my Entities from a set of prebuilt components? Not only could we build a set of off-the-shelf tested and reusable components that we could mix and match, but also, we would be favoring composition over inheritance, effectively eliminating the need for sub-classing altogether!

This is the main idea behind the Entity-Component-System architecture of modern Game Engines, and it has been the main focus of the work in the Engine this week.

The brand new Component architecture allows adding properties (and even behavior!) dynamically to Entities in a flexible way that prevents coupling and encourages code reuse. The idea behind it is simple enough: components that are added to entities will get regular update() calls (once per frame) that can then be used to affect the Entity they are attached to or the world around them.
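
Here is a minimal sketch of that idea, with hypothetical names standing in for the actual Vortex classes: an Entity owns a list of components and, once per frame, the engine walks every entity and calls update() on each attached component.

// ecs_update_sketch.cpp -- illustrative only; names are stand-ins for the
// actual Vortex Engine classes.
#include <memory>
#include <vector>

class Entity;   // forward declaration

class Component {
public:
    virtual ~Component() = default;
    // Called once per frame for every component attached to an entity.
    virtual void update(Entity& owner, float dt) = 0;
};

class Entity {
public:
    void addComponent(std::unique_ptr<Component> c) {
        components.push_back(std::move(c));
    }
    void update(float dt) {
        for (auto& c : components)
            c->update(*this, dt);   // each component affects its owner
    }
private:
    std::vector<std::unique_ptr<Component>> components;
};

// Once per frame, the engine updates every entity in the scene.
void updateScene(std::vector<Entity>& scene, float dt) {
    for (auto& entity : scene)
        entity.update(dt);
}

int main() {
    std::vector<Entity> scene(1);        // one entity, no components yet
    updateScene(scene, 1.0f / 60.0f);    // one 60 FPS frame
}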

In the image above, the “Knight” Entity has a Md2 Animation Component. This component has its update() function set so that it takes care of updating the Entity’s vertex data according to the animation it’s playing. It can also expose, by means of the Inspector, a UI that the user can access to set the currently playing animation and its properties.

Native vs Scriptable Components

At this time, as the new Component concept is introduced, the only component that the Engine provides out of the box is the Md2 Animation Component. The idea from this point on, however, is for all extra functionality that affects Entities to be implemented as components.

I envision that we will end up supporting two types of components in the future: native components (developed in C, C++ and Objective-C against the Vortex API) and scripted components.

If you have been paying attention to the Editor screenshots I’ve been uploading, you will have noticed that since day one we have had a “scripting” tab sitting alongside the 3D View tab. This is because the idea of exposing the Engine through a scripting interface is something that I’ve been interested in for a very long time. With the Component API taking shape, I think that allowing components to be developed through a scripting language is going to be a feasible option that will open the door for implementing tonnes of new features.

There is still a lot of work ahead for both the Editor and the Engine, but I’m sure that the next version of Vortex, “V3”, is going to be the most significant major update in the Engine’s 6+ year history. Stay tuned for more!

MD2 Entities

This week I’ve been working on revamping the old OBJ and MD2 importers to support the new Entity system. I originally wrote these loaders back in 2010 and, although the parser/loader code worked without any changes, I decided to revamp the external interfaces to make it easier to select the correct loader depending on the file type.

Importing an MD2 model with texture into the Vortex Editor.

The image above shows how easy it is now to bring a Quake II MD2 model into the editor. We start by importing the files into the asset library. Then we drag and drop the MD2 file from the library into the 3D world and – voila – a new Entity is created.

Using the properties panel, we can adjust the Entity’s transformation and set the texture file for the material. This process is definitely much simpler than what it used to be, back when we had to load the model through code and then feed its vertex arrays into the GL.

The plan for next week is to wrap up MD2 support by implementing better control over the format’s animations, and then we will be off to newer, better things!

Stay tuned for more!

Bring in the Assets!

This week I worked on a way to bring external assets into the Editor.

Importing an external image into the editor and texturing a cube.

The way this works is by means of a new import tool in the UI. Import allows the user to select any set of files to copy into the project’s resource directory. Once in the project, resources are shown in the new Library View from where they can be accessed.

What can you do with the assets you bring into the project?

For 3D assets, when dragged onto the 3D view the Editor will instantiate a new entity. This new entity will appear in the entities list view and can be manipulated like any other entity.

For Image assets, these can be assigned to materials to change the appearance of entities in the scene. The figure above shows how we can bring a JPG file into the Editor and use it to texture a box created from the Entity menu.

Designing the Asset Library UX

I tried different designs for the library view, starting with a simple list that would show the asset’s name, type and preview, but I ultimately decided to simplify the interaction and settled for a tree view.

It was hard to throw away the initial tabular design I implemented, but I believe going with the tree view was ultimately the right choice.

Once I had the library view, the first thing I wanted to try was whether dragging assets from it onto the 3D world and other widgets would be feasible. I thought this feature would be troublesome to land; however, I was very pleased to see the implementation come together rather quickly.

Drag and drop is a great interaction because it prevents taking the user out of context when she wants to assign a value to a widget or other UI element.

Putting the picture together

If we take a step back, we can now start to see how these features are coming together to help bring the WYSIWYG experience to the Editor.

If you recall from our first post in the Vortex Editor series (click here to quickly jump there), one of the main objectives of this project was to allow simplifying the way we assemble scenes for the Vortex Engine.

Now that we have a Resource Library and a way to visually edit Entity properties, look at how much simpler it is to create a scene from scratch:

  1. First, we start the Editor, click File -> Import Assets.
  2. Then, we select the assets to bring into the project (for instance, a few obj files).
  3. After this, we simply drag the imported obj file from the Library onto the 3D view. This will create a new Entity.
  4. Finally, we adjust the new Entity’s properties (position, rotation, scale) to our liking and drag image assets onto them to texture them. That’s it!

The whole process is very visual, allowing the user to see the scene at all times and (hopefully) unleashing her artistic creativity by allowing quick improvisation and trial and error. It feels much more fun than tweaking a series of float values intermixed with C++ code and recompiling every time.

Having reached this initial milestone, the plan for next week will be mostly polish work for these features, improving project resource management and further cleaning up the internal architecture.

Stay tuned for more!

Entity transform via the Properties Panel

This week I finished implementing full editing of Entity transformations by means of the properties panel.

Five boxes showcasing different transforms set via the properties panel.

In the above image, five box instances were dynamically created and then moved, rotated and scaled using nothing but the transform panel. This is a big milestone that will allow us, from now on, to assemble complete scenes to be rendered with the engine without having to tinker with transformation values in code.

Now, a common question that I’ve been receiving during this series is whether the editor is capable of rendering something other than boxes. Because the editor is powered by the full Vortex Engine, it can load and render a number of file formats out of the box.

With the ability to assemble scenes dynamically, we can now start bringing in external geometry in the form of entities and build our setup from them. This coming week I’m going to be working on better Entity support for external meshes and on bringing them into the editor.

Stay tuned for more!

Transform Inspection in the Vortex Editor

This week I finished implementing the Transform Panel of the properties view. This is a huge step towards providing a WYSIWYG interface to the Vortex Engine!

The now-complete properties panel. Notice how rotation and scale are now correctly displayed.

This work encompassed being able to select any entity and inspect its properties, namely, its position, rotation and scale.

As I described in my previous post, I reworked the way entities store their transforms in order to keep properties separate, only combining them for rendering purposes. Moving entities to the new Transform construct was simpler than I had anticipated, as entities encapsulate and manage the scenegraph nodes that the Vortex renderers expect.
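
As an aside, the core of the Transform construct can be expressed in a few lines. This sketch uses GLM types purely for illustration (Vortex has its own math classes); the important detail is that position, rotation and scale are stored separately and only combined, in that order, when a matrix is needed for rendering.

// transform_sketch.cpp -- illustrative; uses GLM instead of Vortex's own
// math types. Requires a GLM include path.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

struct Transform {
    glm::vec3 position{0.0f};
    glm::quat rotation{1.0f, 0.0f, 0.0f, 0.0f};   // identity (w, x, y, z)
    glm::vec3 scale{1.0f};

    // Properties are kept separate and only combined for rendering:
    // world = translate * rotate * scale.
    glm::mat4 worldMatrix() const {
        return glm::translate(glm::mat4(1.0f), position)
             * glm::mat4_cast(rotation)
             * glm::scale(glm::mat4(1.0f), scale);
    }
};

int main() {
    Transform t;
    t.position = glm::vec3(1.0f, 2.0f, 3.0f);
    glm::mat4 m = t.worldMatrix();
    (void)m;   // hand this matrix to the renderer / scenegraph node
}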

The next step now in the Properties Panel saga is to add editing capabilities to the UI, so that modifying the values presented in the transform will alter the entities in the 3D world.

The only major issue that I see here is that, as I start to add more editing capabilities to the UI, it becomes ever more pressing to start implementing the undo/redo stack. It will be necessary as the Editor becomes more powerful.

This raises the question of whether these commands should go through a centralized front controller, as opposed to having each UI component modify entities on its own. Now, a centralized component for actions must be carefully designed, as it can also provide the native backend for an (eventual) engine scripting system – something I’ve also started to research, but that requires a post on its own : )
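
For what it’s worth, the usual shape of such a front controller is the classic command pattern: every edit is an object that knows how to apply and revert itself, and the controller owns the undo and redo stacks. A minimal sketch with hypothetical names (not the actual Editor code):

// undo_stack_sketch.cpp -- illustrative only; class names are hypothetical.
#include <memory>
#include <vector>

class Command {
public:
    virtual ~Command() = default;
    virtual void execute() = 0;   // apply the edit
    virtual void undo() = 0;      // revert the edit
};

// Centralized front controller: every UI widget funnels its edits through
// here instead of modifying entities directly.
class CommandStack {
public:
    void perform(std::unique_ptr<Command> cmd) {
        cmd->execute();
        undoStack.push_back(std::move(cmd));
        redoStack.clear();        // a new edit invalidates the redo history
    }
    void undo() {
        if (undoStack.empty()) return;
        undoStack.back()->undo();
        redoStack.push_back(std::move(undoStack.back()));
        undoStack.pop_back();
    }
    void redo() {
        if (redoStack.empty()) return;
        redoStack.back()->execute();
        undoStack.push_back(std::move(redoStack.back()));
        redoStack.pop_back();
    }
private:
    std::vector<std::unique_ptr<Command>> undoStack;
    std::vector<std::unique_ptr<Command>> redoStack;
};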

The plan for this week is then: add “write” capabilities to the properties panel and start implementing the undo/redo stack. Stay tuned for more!