Transform Inspection in the Vortex Editor

This week I finished implementing the Transform Panel of the properties view. This is a huge step towards providing a WYSIWYG interface to the Vortex Engine!

The now-complete properties panel. Notice how rotation and scale are now correctly displayed.

This work encompassed being able to select any entity and inspect its properties, namely, its position, rotation and scale.

As I described in my previous post, I reworked the way entities store their transforms in order to keep these properties separate, only combining them for rendering purposes. Moving entities to the new Transform construct was simpler than I had anticipated, as entities encapsulate and manage the scenegraph nodes that the Vortex renderers expect.

The next step now in the Properties Panel saga is to add editing capabilities to the UI, so that modifying the values presented in the transform will alter the entities in the 3D world.

The only major issue that I see here is that, as I start to add more editing capabilities to the UI, it becomes ever more pressing to start implementing the undo/redo stack. It will be necessary as the Editor becomes more powerful.

This raises the question of whether commands on that stack should go through a centralized front controller, as opposed to having each UI component modify entities on its own. A centralized component for actions must be carefully designed, as it can also provide the native backend for an (eventual) engine scripting system – something I’ve also started to research, but that deserves a post of its own : )
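As a rough sketch of the direction I have in mind (the names here are hypothetical, not a final Editor API), every UI action would be packaged as a command object that the central controller executes and records for undo:

// Hypothetical sketch of a command-based undo/redo stack; not actual Editor code.
#include <memory>
#include <stack>

class EditorCommand
{
  public:
    virtual ~EditorCommand() = default;
    virtual void execute() = 0;  // apply the change to the engine
    virtual void undo() = 0;     // revert the change
};

class CommandStack
{
  public:
    // Every UI interaction submits a command here instead of
    // modifying entities on its own.
    void submit( std::unique_ptr<EditorCommand> command )
    {
        command->execute();
        mUndoStack.push( std::move( command ) );
    }

    void undoLast()
    {
        if ( mUndoStack.empty() )
            return;
        mUndoStack.top()->undo();
        mUndoStack.pop();
    }

  private:
    std::stack<std::unique_ptr<EditorCommand>> mUndoStack;
};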

The plan for this week is then: add “write” capabilities to the properties panel and start implementing the undo/redo stack. Stay tuned for more!

Visual Editing of Transformations through the Vortex Editor UI

This week I started implementing the properties panel (sometimes called the “inspector” panel) for the Vortex Editor.

The redesigned Transformation Panel in the Vortex Editor vertical slice.

I originally wanted to go with a table design for the UI. I thought it would give me the flexibility of adding as many editable entries as necessary. I even implemented a mockup in the UI that can be seen in previous screenshots of the Editor.

The problem I found, however, was that the differences in font and text spacing between the left and right panels of the UI made the layout look uneven. I knew I wanted something more symmetrical for what’s essentially the “home” view of the Editor, so I came up with a new design, which can be seen in the image above.

Here, the Transform Panel is a custom UI component that is created dynamically and parented to the docked panel on the right. This provides a more uniform layout with two advantages: first, any number of property panels can be created and they will be nicely stacked one after the other in the panel (this will be useful in the future). Second, because the properties panel is detachable, the user can still customize the Editor layout to her liking.

Now, one minor roadblock I’ve encountered from the engine standpoint for fully realizing this idea is the way Vortex currently represents Entity transformations.

All transformations in Vortex are represented as a 4×4 matrix. The driving force behind this decision was to avoid having to convert between rotation representations at render time, thus saving some time during scenegraph traversal passes.

So what does a generic transformation matrix look like in Vortex?

A transformation matrix is a 4×4 grid of values in the homogeneous coordinate system:

  \begin{array}{cccc}  sr_{00} & r_{01} & r_{02} & t_{x} \\  r_{10} & sr_{11} & r_{12} & t_{y} \\  r_{20} & r_{21} & sr_{22} & t_{z} \\  0 & 0 & 0 & 1  \end{array}

In this matrix, (tx, ty, tz) correspond to the translation component of the Entity and we can easily extract this information to populate the Transformation Panel. But what happens with the rotation and scale?

Rotation and scale, however, are mixed together in the matrix (represented by the overlapping sr components), so we can’t recover the original scale and rotation that generated it. This means we would only be able to show and edit the position of the Entity, and not its rotation or scale.

This is a limitation that has to be lifted.

The plan is to provide a higher-level construct for describing transformations: a Transform class that keeps separate tabs on position, rotation and scale, while also providing a convenience method for computing, on demand, the transform matrix it represents.

A tentative interface for the Transform class could be:

namespace vtx {
class Transform
{
  public:
    void setPosition( float x, float y, float z );
    void setRotationEuler( float rx, float ry, float rz );
    void setScale( float sx, float sy, float sz );

    vtx::Matrix4 matrix() const; // used for the rendering pass

    // Other getter methods omitted
};
}
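Conceptually, matrix() just composes the three stored components on demand. Assuming the usual convention of applying scale first, then rotation, then translation (my assumption here, not necessarily the exact convention Vortex will use), the computed matrix would be:

  M = T(t_{x}, t_{y}, t_{z}) \, R(r_{x}, r_{y}, r_{z}) \, S(s_{x}, s_{y}, s_{z})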

I think this will be a good change for the Engine. Working with Entities using a position-rotation-scale mindset, instead of having to deal with the cognitive overhead of expressing these as matrix operations, will help users be more productive (and save precious keystrokes).

This coming week I will be working on finalizing this implementation and finally exposing full Entity transformation control through the UI. Stay tuned!

The GLSL Shader Editor

This week we take a break from work in the Vortex Editor to revisit an older personal project of mine: the GLSL Shader Editor, a custom editor for OpenGL shaders.

The UI of my custom shader editor.

The idea of the editor was to allow very fast shader iteration times by providing an area where a shader program could be written and then, by simply pressing Cmd+B (Ctrl+B on Windows), the shader source would be compiled and hot-loaded into the running application.

Hot-loading made it possible to see the results of the new shading instantly, without having to stop the app and without even having to save the shader source files. This allowed for very fast turnaround times when experimenting with shader ideas.

As the image above shows, the UI was divided in two main areas: an Edit View and a Render View.

The Edit View consisted of a tabbed component with two text areas. These text areas (named “Vertex” and “Fragment”) are where you could write your custom vertex and fragment shaders, respectively. Their contents would be the shader source that was compiled and linked into a shader program.

The shader program would be compiled by pressing Cmd+B and, if no errors were found, then it would be hot-loaded and used to shade the model displayed in the Render View.

The status bar (showing “OK” in the image) would display any shader compilation errors as reported by the video driver.
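For reference, the compile-and-report step boils down to the standard GL calls below. This is only a sketch of the approach (not the editor’s actual code), and it assumes the OpenGL 2.0 entry points are already available to the application:

// Sketch: compile a shader and capture the driver's error log on failure.
// Assumes a valid GL context and that the GL 2.0 functions have been loaded
// (GLuint, glCompileShader, etc. come from the platform's OpenGL headers).
#include <string>
#include <vector>

bool compileShader( GLuint shader, const std::string& source, std::string& log )
{
    const char* src = source.c_str();
    glShaderSource( shader, 1, &src, nullptr );
    glCompileShader( shader );

    GLint status = GL_FALSE;
    glGetShaderiv( shader, GL_COMPILE_STATUS, &status );
    if ( status == GL_TRUE )
        return true;  // the status bar shows "OK"

    GLint length = 0;
    glGetShaderiv( shader, GL_INFO_LOG_LENGTH, &length );
    std::vector<char> buffer( length > 0 ? length : 1, '\0' );
    glGetShaderInfoLog( shader, static_cast<GLsizei>( buffer.size() ), nullptr, buffer.data() );
    log.assign( buffer.data() );  // the text displayed in the status bar
    return false;
}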

The application had a number of built-in primitives and also allowed importing models in the OBJ format. It was developed on Ubuntu Linux and supported MS Windows and OS X on a number of different video cards.

Application Features

  • Built entirely in C++.
  • Supports Desktop OpenGL 2.0.
  • Qt GUI.
  • Supported platforms: (Ubuntu) Linux, MS Windows, OS X.
  • Diverse set of visual primitives and OBJ import support.
  • Very efficient turn-around times by hot-loading the shader dynamically – no need to save files!
  • GLSL syntax highlighting.
  • Docked, customizable UI.

Interestingly, this project was developed at around the same time I got started with the Vortex Engine; therefore, it does not use any of Vortex’s code. This means that all shader compilation and loading, as well as all rendering, were developed from scratch for this project.

I’ve added a project page for this application (under Personal Projects in the blog menu). I’ve also redesigned the menu to list all different personal projects that I’ve either worked on or that I’m currently working on, so please feel free to check it out.

Next week, we’ll be going back to the Vortex Editor! Stay tuned for more!

Basic Entity Instancing and Positioning

This week, I’ve continued working on the Editor, adding Entity instancing through the UI and (simple) position editing.

An animated gif showing very basic entity creation and positioning in the Vortex Editor.

The image above is a rough preview showing how any number of Entities can now be created. The next steps will be to refine this with mouse-based selection, as well as a more comprehensive UI that allows for finer control over Entity position, rotation and scale.

Once these and a few more Editor features are in, I will create a higher quality video showcasing the creation of an environment by means of Entity editing.

Stay tuned for more!

Entity Management in Vortex

This week I started working on the foundation for entity management in the Vortex Editor.

“Entities” are a new concept I’m introducing in the next version of the Vortex Engine (V3), whose design responds to the larger effort of simplifying the engine’s API.

The Updated UI of the Editor.

Entities in Vortex are high level objects that have a transformation, a material, a mesh and (of course) a name.

Down the line, the plan is that entities expose more properties. Custom shading and executable code are strong candidates that come to mind. As part of the Vortex Editor vertical slice, however, basic properties for placing entities in 3D space and drawing them are all we need.
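As a rough sketch of what that looks like (a hypothetical declaration that mirrors the description above, not the actual V3 headers):

namespace vtx {
class Entity
{
  public:
    const std::string& name() const;

    Transform& transform();  // position, rotation and scale
    Material& material();    // how the entity is shaded
    Mesh& mesh();            // the geometry to render

    // Future candidates: custom shading, executable code, ...
};
}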

Entities are also first-class citizens in the Editor, so I reworked the Editor’s UI to add two new panels: an entity list view and a property editor. The image below shows a scene setup with two box entities. Note their names listed in the panel on the left.

Entity management in the Vortex Editor.

Because I wanted the contents of the entity list to be taken directly from the entity management component in the engine, I had to refactor the internal architecture of the Editor a bit. As it stood before, the OpenGL View was the one handling the Editor Controller and, by extension, the engine.

This created a weird situation where the entity list view’s controller would have to go through the GL View to query the existing entities. This had to be fixed.

Thankfully, because the Editor Controller was fully encapsulating the engine, it was easy to move the instance of this class up the object hierarchy without introducing undesired dependencies on Vortex’s core in the UI classes.

All in all, the whole rework didn’t take as big of an effort as I had originally anticipated, and I’m pretty satisfied with the results.

The next step now is to expose entity instancing to the Editor UI! Stay tuned for more!

Designing the Editor Architecture

Last week I used the (very little) free time that I had to work on the internal architecture of the Editor and how it’s going to interact with the Vortex Engine.

In general terms, the plan is to have all UI interactions be well defined and go through a Front Controller object that is responsible for driving the engine. This Front Controller will be a one-stop shop for the entire implementation behind the UI, and it will also, at a later stage, provide finer-grained control of the engine.
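To illustrate the idea (with hypothetical names, not the final Editor API), every UI widget would call into a single object like the one below instead of touching the engine directly:

// Hypothetical sketch of the Front Controller concept; names are illustrative only.
namespace editor {
class FrontController
{
  public:
    // Each well-defined UI interaction maps to one entry point here.
    void createEntity( const std::string& name );
    void setEntityPosition( const std::string& name, float x, float y, float z );
    void deleteEntity( const std::string& name );

    // Undo/redo and a scripting backend can later hook in at this level,
    // since every engine mutation already flows through this one place.
};
}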

Vortex Framebuffer Object Support: a knight is rendered on a texture that is then mapped on a cube. All rendering is done on the GPU, avoiding expensive copies to RAM.

Other components I’ve been designing include an undo/redo stack (which is super important for an editor application) and a scripting API. It’s still early for both these components, but I think it’s better if the design supports these from early on as opposed to trying to tack them on to the Editor at a later stage.

Finally, last week I took the time to bootstrap a higher OpenGL version on Windows. The Editor now has access to full OpenGL on this platform. This is a significant milestone that opens the door to bringing Vortex’s advanced rendering techniques to Windows, such as framebuffer objects (FBOs), as depicted in the image above.

I’ve only got a short update for this week. Stay tuned for more to come : )

Vortex Editor .plan

Not too long ago, I started working on an Editor for the Vortex Engine. I have been toying with the idea for years and I finally decided to get started. Not only because it is going to be an interesting challenge, but also because I feel it’s a good way to improve the development workflow when using the engine.

A very early screenshot depicting a scene with a Box entity. No lighting, no mipmapping, no AA. “Crate” texture image courtesy of lighthouse3d.com

Using the Engine Today

Let’s take a look at the way I can build an App with the Vortex Engine today. First, I would create a new Application (be it a Linux, Mac or iOS App). Then, I would link against the engine, and finally, I would create a scene manually through the Vortex API.

Now, while this approach certainly works, and even plays to one of Vortex’s strengths by allowing you to integrate the engine into any App without taking over the application loop, it does become cumbersome to create the scene programmatically.

The reason is that this process usually amounts to repeating a series of steps for every scene in the App:

  1. Start by taking a first stab in the dark.
  2. Build and run the App.
  3. Realize that you want to change the scene layout.
  4. Go back to the code, change it.
  5. Rebuild and re-run the App.
  6. Repeat from step 3 until you’re satisfied with the results.

The idea of the Editor is to tackle this problem head-on. With the Editor, you will be able to “see” the scene you are building, tweak it visually, and then save it as a package that can be loaded by the engine.

Bringing Vortex to Windows

Starting a new project for the Editor raises the question of which platforms the App should run on. The Editor will be a desktop App, so ideally it would work on all three major desktop platforms: Windows, Linux and Mac.

Now, there is no point in making a new renderer for the Editor, as we want the scene we see in it to be as close as possible to what the final user Apps will render. What this means is that the Editor needs to run the engine.

Portability has always been one of the key tenets of the Vortex Engine, so this is the perfect opportunity to bring the engine to Windows, a platform it’s never run on before.

Bringing a codebase that was born on Linux and then expanded to support Mac and iOS over to Windows is the ultimate test of source-code-level portability. Once finished, the end result will be a flexible codebase that also adheres more closely to the C++ standard.

So far, the two main challenges in building the engine on Windows have been: adapting the codebase for building under the MSVC compiler and Windows’ barebones support for OpenGL.

Building on MSVC

Although Vortex is standard C++ and it builds with both GCC and Clang, building it with MSVC required a few changes here and there to conform better to its front end.

This also meant reconsidering some dependencies of the engine to allow for a non-POSIX environment. Thankfully, the move to C++11 has already helped replace some UNIX-specific functions with now-standard equivalents.

OpenGL on Windows

Regarding OpenGL support, the windowing toolkit I’ve chosen for implementing the UI has proven to be more of a problem than a solution. At the time of writing, and mostly because I’m trying to maintain a high velocity building the Editor, I haven’t taken the time to bootstrap anything beyond OpenGL 1.1 support.

This would be a problem; however, Vortex’s Dual Pipeline support, which I first described in this post back in 2011, has proven essential by allowing the engine to scale down to OpenGL 1.1.

Dual Pipeline support: a comparison of the rendering pipelines available in the Vortex Engine. The image on the left represents the Fixed Pipeline; the image on the right represents the Programmable Pipeline.

The plan is to move forward with the basic Editor functionality and then drop in the programmable pipeline renderer later in the game, retiring the fixed one.

It’s quite amazing to see the fixed pipeline renderer, written about 6 years ago, running unmodified on a completely new platform that it has never been tested on before. This is the true virtue of OpenGL.

In Closing

So far work is progressing nicely. As the image above shows, I have a simple proof-of-concept of the engine running inside the Editor skeleton under Windows. This is the foundation on which I will continue building the Editor App.

Stay tuned for more!

Goodbye Hypr3D

It seems the sun has set on the good old Hypr3D app. This was the first commercial app to use a licensed copy of my 3D Engine back in 2011.

Hypr3D’s help page showing the viewer’s recognized gestures.

Hypr3D rendering a reconstructed pineapple model on an iPhone.

Back in the day, the Vortex Engine was at version 1.0 and pretty much done when the App was developed. Although this made using it almost a “plug and play” experience, there were some interesting problems that had to be solved during development.

One problem in particular was how to integrate an HTML UI with the Objective-C Front Controller of the App. This was way before any of the modern “hybrid app” toolchains existed, and I remember being on a trip to Seattle when I started to lay down the main concepts.

Goodbye Hypr3D, I’ll always remember the countless hours staring at the cookies reference model while debugging!

Hypr3D’s model viewer rendering a reconstructed model featuring a pile of rocks on an iPad.

I’ve created a dedicated page in the memory of Hypr3D with a short feature list and some more screenshots. You can visit it here.

Although Hypr3D may not be available anymore, Vortex Engine 1.0 is anything but dead. Stay tuned ;)

Conway’s Game of Life

This week we take a short break from 3D programming topics and go into gaming! Well, sort of…

A few weeks ago I published a CUDA implementation of Conway’s Game of Life on my GitHub page. The code is pretty simple, well in tune with the simplicity of the game.

The implementation can be found here: https://github.com/alesegovia/game-of-life.

If you are not familiar with the game, Conway’s Game of Life is a zero-player game where cells live and die on an infinite 2D grid. According to Wikipedia, the life/death rules are the following:

Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:

  1. Any live cell with fewer than two live neighbours dies, as if caused by under-population.
  2. Any live cell with two or three live neighbours lives on to the next generation.
  3. Any live cell with more than three live neighbours dies, as if by overcrowding.
  4. Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.

Conway’s game is excellent for implementing on a GPU, as it involves analyzing all the cells in the 2D grid and, best of all, each cell’s next state depends only on the previous state of its neighbors and never on their current state.

This means that we can spawn a GPU thread for every single cell in the board and calculate the next state in parallel.

In the published implementation, the board size is 64×64 cells, so we are effectively spawning 4,096 GPU threads to solve every iteration. We do this for one million generations.
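As a rough illustration of the per-cell rule (written here as plain C++ rather than reproducing the CUDA kernel from the repository, and assuming a wrap-around boundary, which may differ from the published code), this is the work each GPU thread performs for its cell, reading only the previous generation:

#include <array>

constexpr int kWidth  = 64;
constexpr int kHeight = 64;
using Board = std::array<unsigned char, kWidth * kHeight>;  // 1 = alive, 0 = dead

// Next state of cell (x, y), computed purely from the previous generation 'prev'.
// On the GPU, one thread runs this logic for each of the 64x64 cells in parallel.
unsigned char nextState( const Board& prev, int x, int y )
{
    int liveNeighbours = 0;
    for ( int dy = -1; dy <= 1; ++dy )
    {
        for ( int dx = -1; dx <= 1; ++dx )
        {
            if ( dx == 0 && dy == 0 )
                continue;
            const int nx = ( x + dx + kWidth ) % kWidth;    // wrap around the edges
            const int ny = ( y + dy + kHeight ) % kHeight;
            liveNeighbours += prev[ny * kWidth + nx];
        }
    }

    const bool alive = prev[y * kWidth + x] != 0;
    if ( alive )
        return ( liveNeighbours == 2 || liveNeighbours == 3 ) ? 1 : 0;  // rules 1-3
    return ( liveNeighbours == 3 ) ? 1 : 0;                             // rule 4
}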

The project has been released under a GPLv3 license, so feel free to download, build it, run it, modify it and share it with others under its terms.

If you are looking for a fun weekend project, the game could definitely use a UI. I’ll give you extra points if you can draw it using OpenGL without ever having to copy the board back from GPU memory into system memory ;-)

Enjoy!

Bump Mapping a Simple Surface

In my last post, I started discussing Bump Mapping and showed a mechanism through which we can generate a normal map from any diffuse texture. At the time, I signed off by mentioning next time I would show you how to apply a bump map on a trivial surface. This is what we are going to do today.



Bump Mapping a Door texture. Diffuse and Normal maps taken from the Duke3D HRP. Rendered using Vortex 3D Engine. (HTML5 video, a compatibility GIF version can be found here.)

 
In the video above you can see the results of the effect we are trying to achieve. This video was generated from a GIF file created using the Vortex Engine. It clearly shows the dramatic lighting effect bump mapping achieves.

Although it may seem as if this image is composed of a detailed mesh of the boss’ head, it is in fact just two triangles. If you could see the image from the side, you’d see it’s completely flat! The illusion of curvature and depth is generated by applying per-pixel lighting on a bump mapped surface.

How does bump mapping work? Our algorithm takes two images as input: the diffuse map and the bump map. The diffuse map holds the color of each pixel in the image, whereas the bump map consists of an encoded set of per-pixel normals that we will use to affect our lighting equation.

Here’s the diffuse map of the Boss door:

Diffuse map of the boss door. Image taken from the Duke3D HRP project.

And here’s the bump map:

Bump map of the boss door. Image taken from the Duke3D HRP project.

I’m using these images taken from the excellent Duke3D High Resolution Pack (HRP) for educational purposes. Although we could’ve generated the bump map using the technique from my previous post, this especially-tailored bump map will provide better results.

Believe it or not, there are no more input textures used! The final image was produced by applying the technique on these two. This is the reason I think bump mapping is such a game changer. This technique alone can significantly up the quality and realism of the images our renderers produce.

It is especially shocking when we compare the diffuse map with the final bump-mapped image. Even if we applied per-pixel lighting to the diffuse map in our rendering pipeline, the results would be nowhere close to what we can achieve with bump mapping. Bump mapping really makes this door “pop out” of its surface.

Bump Mapping Theory

The theory I develop in these sections is heavily based on the books Mathematics for 3D Game Programming and Computer Graphics by Eric Lengyel and More OpenGL by David Astle. You should check those books for the definitive reference on bump mapping; here, I try to explain the concepts in simple terms.

So far, you might have noticed I’ve been saying that the bump map contains the “deformed” normals we should use when applying the lighting equation to the scene. But I haven’t explained how these normals are actually introduced into our lighting equations.

Remember from my previous post that normals are stored in an RGB image, and that normals close to (0,0,1) looked blueish? That is because normals are stored in a coordinate system that corresponds to the image. This means that, unfortunately, we can’t just take each normal N and plug it into our lighting equation. If we call L the vector that takes each 3D point (corresponding to each fragment) to the light source, the problem is that L and N are in different coordinate systems.

L is, of course, in camera or world space, depending on where you like doing your lighting math. But where is N? N is defined in terms of the image. That’s neither of those spaces.

Where is it then? N is in its own coordinate system, which authors refer to as “tangent space”.

In order to apply per-pixel lighting using the normals coming from the bump map, we’ll have to bring all vectors into the same coordinate system. For bump mapping, we usually bring the L vector into tangent space instead of bringing all the normals back into camera/world space: it is more convenient (one vector transformed per vertex instead of one normal per fragment) and it produces the same results.

Once L has been transformed, we will retrieve N from the bump map and use the Lambert equation between these two to calculate the light intensity at the fragment.
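In equation form, calling C the RGB value sampled from the bump map (with components in [0, 1]) and L the tangent-space light vector, the per-fragment work is simply the standard decode followed by the Lambert term:

  N = 2\,C - 1 \qquad I = \max(0,\; N \cdot \hat{L})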

From World Space to Tangent Space

How can we convert from camera space to tangent space? Tangent space is not by itself defined in terms of anything that we can map to our mesh. So, we will have to use one additional piece of information to determine the relationship between these two spaces.

Given that our meshes are composed of triangles, we will assume the bump map is to be mapped on top of each triangle. The orientation will be given by the direction of the texture coordinates of the vertices that comprise the triangle.

This means that if we have a triangle with an edge (-1,1)(1,1) whose texture coordinates are (0,1)(1,1), the horizontal vector (1,0) is tangent to the surface and aligned with the horizontal texture coordinate direction. We will call this vector the tangent.

Now, we need two more vectors to define the coordinate system. The second vector we can use is the normal of the triangle. This vector is, by definition, perpendicular to the surface and will therefore be perpendicular to the tangent.

The final vector we will use to define the coordinate system has to be perpendicular to both, the normal and the tangent, so we can calculate it using a cross product. There is an ongoing debate whether this vector should be called the “bitangent” or the “binormal” vector. According to Eric Lengyel the term “binormal” makes no sense from a mathematical standpoint, so we will refer to it as the “bitangent”.

Now that we have the three vectors that define tangent space, we can create a transformation matrix that takes the vector L and expresses it in the same coordinate system as the normals for that specific triangle. Doing this for every triangle allows us to apply bump mapping across the whole mesh.
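With the tangent T, bitangent B and normal N at hand (and treating them as an orthonormal basis, which holds exactly for our flat door), the matrix that takes a vector from object/world space into tangent space simply has these three vectors as its rows:

  L' = \begin{pmatrix} T_{x} & T_{y} & T_{z} \\ B_{x} & B_{y} & B_{z} \\ N_{x} & N_{y} & N_{z} \end{pmatrix} L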

Responsibility – Who Does What

Although we can compute the bitangent and the transform matrix in our vertex shader, we will have to supply the tangent vectors as input to our shader program. Tangent vectors need to be calculated using the CPU, but (thankfully) only once. Once we have them, we supply them as an additional vertex array.

Calculating the tangent vectors is trivial for a simple surface like our door, but can become very tricky for an arbitrary mesh. The book Mathematics for 3D Game Programming and Computer Graphics provides a very convenient algorithm to do so, and is widely cited in other books and the web.
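For reference, here is a rough per-triangle version of that calculation: a simplified sketch of the approach described in Lengyel’s book, using minimal stand-in Vec2/Vec3 types rather than the actual Vortex math classes:

#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Tangent of one triangle (p0, p1, p2) with texture coordinates (uv0, uv1, uv2).
// For a full mesh, per-vertex tangents are accumulated over all the triangles
// sharing each vertex and then re-normalized.
Vec3 triangleTangent( const Vec3& p0, const Vec3& p1, const Vec3& p2,
                      const Vec2& uv0, const Vec2& uv1, const Vec2& uv2 )
{
    const Vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    const Vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };

    const float du1 = uv1.x - uv0.x, dv1 = uv1.y - uv0.y;
    const float du2 = uv2.x - uv0.x, dv2 = uv2.y - uv0.y;

    // Assumes the triangle's texture coordinates are not degenerate.
    const float r = 1.0f / ( du1 * dv2 - du2 * dv1 );

    Vec3 t = { ( e1.x * dv2 - e2.x * dv1 ) * r,
               ( e1.y * dv2 - e2.y * dv1 ) * r,
               ( e1.z * dv2 - e2.z * dv1 ) * r };

    const float len = std::sqrt( t.x * t.x + t.y * t.y + t.z * t.z );
    return { t.x / len, t.y / len, t.z / len };
}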

For our door, composed of the vertices:

(-1.0, -1.0, 0.0)
(1.0, -1.0, 0.0)
(1.0, 1.0, 0.0)
(-1.0, 1.0, 0.0)

Tangents will be:

(1.0, 0.0, 0.0)
(1.0, 0.0, 0.0)
(1.0, 0.0, 0.0)
(1.0, 0.0, 0.0)

Once we have the bitangents and the transformation matrix, we rotate L in the vertex shader, and pass it down to the fragment shader as a varying attribute, interpolating it over the surface of the triangle.

Our fragment shader can just take L, retrieve (and decode) N from the bump map texture and apply the Lambert equation on both of them. The rest of the fragment shading algorithm need not be changed if we are not applying specular highlights.

In Conclusion

Bump mapping is an awesome technique that greatly improves the lighting in our renderers at a limited additional cost. Its implementation is not without its challenges, however.

Here are the steps necessary for applying the technique:

  • After loading the geometry, compute the per-vertex tangent vectors in the CPU.
  • Pass down the per-vertex tangents as an additional vertex array to the shader, along with the normal and other attributes.
  • In the vertex shader, compute the bitangent vector.
  • Compute the transformation matrix.
  • Compute vector L and transform it into tangent space using this matrix.
  • Interpolate L over the triangle as part of the rasterization process.
  • In the fragment shader, normalize L.
  • Retrieve the normal from the bump map by sampling the texture. Decode the RGBA values into a vector.
  • Apply the Lambert equation using N and the normalized L.
  • Finish shading the triangle as usual.

On the bright side, since this technique doesn’t require any additional shading stages, it can be implemented in both OpenGL and OpenGL ES 2.0 and run on most of today’s mobile devices.

In my next post I will show bump mapping applied to a 3D model. Stay tuned!