Shader Texture Mapping and Interleaved Arrays

Work continues on the new Rendering System for the Vortex Engine.

This week was all about implementing basic texture mapping using GL Core. The following image shows our familiar box, only this time, it’s being textured from a shader.

Texture Mapping via Interleaved Arrays and Shaders in the Vortex Engine.

A number of changes had to go into the Renderer in order to perform texture mapping. These touched almost all the layers of the Engine and Editor.

  1. First, I wrote a “Single Texture” shader in GLSL to perform the perspective transform of the Entity’s mesh, interpolate its UV texture coordinates and sample a texture (a rough sketch of such a shader follows this list).
  2. Second, I had to change the way the retained mesh works in order to be able to send texture coordinate data to the video card (more on this later).
  3. Finally, I had to modify the Editor UI to allow selecting which shader is to be used when rendering an Entity.
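
For reference, here is a minimal sketch of what such a single-texture shader pair could look like, kept as C++ string constants. The attribute, uniform and output names are illustrative assumptions, not the actual Vortex shader interface.

// Hypothetical GLSL 3.3 sources for a minimal "single texture" shader.
// Attribute locations and uniform names are assumptions for illustration.
static const char* kSingleTextureVert = R"(
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec2 aTexCoord;
    uniform mat4 uModelViewProjection;   // perspective * view * model
    out vec2 vTexCoord;
    void main() {
        vTexCoord = aTexCoord;           // handed to the rasterizer for interpolation
        gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
    }
)";

static const char* kSingleTextureFrag = R"(
    #version 330 core
    in vec2 vTexCoord;                   // interpolated UVs
    uniform sampler2D uTexture;
    out vec4 outColor;
    void main() {
        outColor = texture(uTexture, vTexCoord);  // sample the bound texture
    }
)";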

Now, regarding how to submit mesh texture coordinate data to the video card: since OpenGL Core uses Vertex Buffer Objects (VBOs), it was clear that UVs had to be sent to (and retained in) GPU memory as well.

There are two ways to achieve this in OpenGL. One consists in creating several VBOs so that each buffer holds a single vertex attribute: position data is stored in its own buffer, texture coordinates in their own buffer, per-vertex colors in a third buffer, and so on and so forth.

There’s nothing wrong with doing things this way and it definitely works. There is one consideration to take into account, however: when vertex data is scattered across several buffers, the GPU has to gather it all from those buffers every frame as part of vertex processing.

I am personally not a big fan of this approach. I prefer interleaving the data myself once and then sending it to the video card in a way that’s already prepared for rendering. OpenGL is pretty flexible in this regard and it lets you interleave all the data you need in any format you may choose.

I’ve chosen to store position data first, then texture coordinate data and, finally, color data (XYZUVRGBA). The retained mesh class will be responsible for tightly packing vertex data into this format.

Once the data is copied over to video memory, setting up the vertex attrib pointers can be a little tricky. This is where interleaved arrays become more challenging than separate attribute buffers. An error here will cause the video card to read garbage memory and possibly segfault on the GPU side, which is never a good idea. I’ve seen errors like this bring down entire operating systems, and it’s not a pretty picture.

A sheet of paper helps when calculating the byte offsets: write down the layout and then manually verify the logic with pen and paper. The video driver must never read outside the interleaved array.
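
To make the offsets concrete, here is a hedged sketch of the XYZUVRGBA layout and the matching attribute pointer setup. Attribute locations 0, 1 and 2 for position, UVs and color are assumptions, not the actual Vortex bindings.

#include <cstddef>   // offsetof; GL headers assumed to be included by the engine

// One tightly packed vertex: position (XYZ), texture coords (UV), color (RGBA).
struct InterleavedVertex {
    float x, y, z;     // bytes  0..11
    float u, v;        // bytes 12..19
    float r, g, b, a;  // bytes 20..35
};
static_assert(sizeof(InterleavedVertex) == 9 * sizeof(float), "must stay tightly packed");

void setupInterleavedAttribs()
{
    const GLsizei stride = sizeof(InterleavedVertex);  // 36 bytes between consecutive vertices

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<const void*>(offsetof(InterleavedVertex, x)));
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<const void*>(offsetof(InterleavedVertex, u)));
    glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, stride,
                          reinterpret_cast<const void*>(offsetof(InterleavedVertex, r)));

    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);
}

With a VBO bound to GL_ARRAY_BUFFER, the final pointer argument is interpreted as a byte offset into the buffer, which is exactly where a miscalculated stride or offset sends the driver reading past the end of the array.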

Once ready, OpenGL will take care of feeding the vertex data into our shader inputs, where we will interpolate the texture coordinates and successfully sample the bound texture, as shown in the image above.

Now that we have a working foundation where we can develop custom shaders for any Entity in the scene, it’s time to start cleaning up resource management. Stay tuned for more!

First Steps for the V3 Renderer

This week, work continued on the new V3 Renderer.

First steps for the V3 renderer.

In the image above, we can see that the renderer is starting to produce some images.

It might not look like much at the moment and, indeed, there’s still work left to reach parity with the legacy fixed-pipeline renderer. Nonetheless, being able to render the floor grid and the PK Knight model is an important step that validates that the core Entity-Component traversal, the Core OpenGL rendering code, the shaders and the new material and retained mesh systems are interacting correctly.

As with any rendering project started from scratch, until the basic foundation comes together you have no option but to rely on your code, a piece of paper and a bunch of scaffolding. Everything has to be built “in the dark”, without being able to see anything on the screen.

Now that we’ve established this foundation, however, we can continue to build the V3 Renderer with visual feedback, which should help tremendously.

Next step: on to basic texture mapping!

Stay tuned for more!

Vortex V3 Renderer

The past couple of weeks have been super busy at work, but I’ve still managed to get the ball rolling with the new renderer for the Vortex Engine.

This is the first time in years I’ve decided to actually write a completely new renderer for Vortex. This new renderer is a complete clean-room implementation of the rendering logic, designed specifically for the new Entity-Component system in the engine.

Current Rendering Systems in Vortex

Let’s start by taking a look at the current rendering systems in the Vortex Engine. Ever since 2011, Vortex has had two rendering systems: a Fixed Pipeline rendering system and a Programmable Pipeline rendering system.

Dual Pipeline support: a comparison of the rendering pipelines available in Vortex Engine. The image on the left represents the Fixed Pipeline. The image on the right represents the Programmable Pipeline.

Both of these rendering systems are pretty robust. Both have been used to develop and launch successful apps on the iOS App Store, and they have proven to be reliable and portable, allowing the programmer to target Linux, Windows, Mac OS X and Android, as well as iOS.

The problem with these renderers is that they were designed with Vortex’s Scenegraph-based API in mind. This means that these renderers do not know anything about Entities or Components, but rather, they work at the Node level.

Moving forward, the direction for the Vortex Engine is to provide an Entity Component interface and move away from the Scenegraph-based API. This means that glue code has to be developed to allow the traditional renderers to draw the Entity-Component hierarchy.

So… why is this a problem?

Why a new Renderer?

As Vortex V3 now provides a brand new Entity-Component hierarchy for expressing scenes, glue code had to be developed in order to leverage the legacy renderers in the Vortex Editor. In the beginning this was not a major problem; however, as the Entity-Component system matures, it has become ever more difficult to maintain compatibility with the legacy renderers.

PBR Materials in Unreal Engine 4. Image from ArtStation’s Pinterest.

Another factor is the incredible pace at which the rendering practice has continued to develop in these past few years. Nowadays, almost all modern mobile devices have support for OpenGL ES 2.0 and even 3.0, and PBR rendering has gone from a distant possibility to a very real technique for mobile devices. Supporting PBR rendering on these legacy renderers would require a significant rewrite of their core logic.

Finally, from a codebase standpoint, both renderers were implemented more than 5 years ago, back when C++11 was just starting to get adopted and compiler support was very limited. This does not mean that the legacy renderers’ codebases are obsolete by any means, but by leveraging modern C++ techniques, they could be cleaned up significantly.

From all of this, it is clear that a new clean-room implementation of the renderer is needed.

Designing a New Renderer

The idea is for the new renderer to be able to work with the Entity-Component hierarchy natively without a translation layer. It should be able to traverse the hierarchy and determine exactly what needs to be rendered for the current frame.

Once the objects to be rendered have been determined, a new and much richer material interface would determine exactly how to draw each object according to its properties.
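
As a rough illustration of the intent (and only that; the type and method names below are assumptions, not the actual Vortex V3 API), a frame could be assembled along these lines:

#include <vector>

// Hypothetical per-frame traversal: walk the Entity-Component hierarchy,
// collect everything drawable, and let the material decide how to draw it.
void renderFrame(const Entity& root, Renderer& renderer)
{
    std::vector<const Entity*> stack{ &root };
    while (!stack.empty()) {
        const Entity* entity = stack.back();
        stack.pop_back();

        if (const MeshComponent* mesh = entity->findComponent<MeshComponent>()) {
            // The material describes *how* to draw (shader, textures, render state).
            renderer.submit(*mesh, mesh->material(), entity->worldTransform());
        }

        for (const Entity* child : entity->children())
            stack.push_back(child);
    }

    renderer.flush();  // issue the actual Core OpenGL draw calls
}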

Just like with the Vortex 2.0 renderer, this new renderer should fully support programmable shaders, but through a simplified API that requires less coding and allows drawing much more interesting objects and visual effects.

Choosing a backing API

Choosing a rendering API used to be a simple decision: pick DirectX for Windows-only code or OpenGL (ES) for portable code. The landscape has changed significantly in the past few years, however, and there are now a plethora of APIs we can choose from to implement hardware-accelerated graphics.

This year alone, the Khronos Group released the Vulkan specification, a new API that tackles some of the problems inherent to OpenGL, as seen in the following image.

Comparison of the OpenGL and Vulkan APIs of the Khronos Group. Slide Copyright (C) 2016 Khronos Group.

Now, both Vulkan and Metal are very appealing low-level APIs that provide a fine degree of control over the hardware, but they are limited in the sense that while Metal is Apple specific, Vulkan is cross-platform but not available on Apple devices.

DirectX 12 is Windows 10 only, which rules it out right off the bat (for this project at least). DirectX 11 is a good option but, again, Windows only.

This leaves OpenGL and OpenGL ES as the two remaining options. I’ve decided to settle on Core OpenGL 3.3 at this time. I think it’s an API that exposes enough modern concepts to allow implementing a sophisticated renderer while also remaining fully compatible with Windows, iOS and everything in-between.

I’m not ruling out implementing a dedicated Metal or Vulkan backend for Vortex in the future, and nothing in the Engine design should prevent this from happening. At this time, however, we have to start on a platform that’s available everywhere.

Using Core OpenGL 3.3 will also allow reusing the battle-tested shader API in Vortex. This component has several years of service under its belt and I’d venture to say that all of its bugs have been found and fixed.

Other than this particular component, I’m also reusing the material interface (but completely overhauling it) and developing a new RetainedMesh class for better handling mesh data streaming to the GPU.

Closing Thoughts

Writing a comprehensive renderer is no weekend task. A lot of components must be carefully designed and built to fit together. The margin for error is minimal, and a problem in any component that touches the video card can potentially make the entire system fail.

It is, at the same time, one of the most satisfying tasks I can think of as a software engineer. Once you see it come to life, it’s more than the sum of its parts: it’s a platform for rendering incredible dream worlds on a myriad of platforms.

I will take my time developing this new renderer, enjoying the process along the way.

Stay tuned for more! : )

The slow road to persistence: encoding

This week I took on the large task of implementing a serialization scheme that allows saving the user’s scene to disk.

A scene containing a textured forktruck model.

The problem at hand can be divided into two major tasks:

  1. Serialize the scene by encoding its contents into some format.
  2. Deserialize the scene by decoding the format to recreate the original data.

I’ve chosen JSON as the serialization format for representing the scene’s static data. That is, entities and components, along with all their properties and relationships, are to be encoded into a JSON document that can then be used to create a perfect clone of the scene.

Why JSON? JSON is a well-known hierarchical format that provides two key benefits: first, it’s easy for humans to read and, hence, to debug. Second, it provides a natural 1:1 mapping to the concept of Entity and Component hierarchies.

It’s also worth noting that JSON tooling is excellent, making it easy to quickly whip up a Python or Javascript utility that works on the file.

The downside is that reading and writing JSON is not as efficient as reading a custom binary format. However, scene loading and saving is a rare operation, most likely performed outside of the main game loop, so JSON is a feasible option at this time.

OK, so without further ado, let’s take a look at what a scene might look like once serialized. The following listing presents the JSON encoding of the scene depicted above. It was generated with the new vtx::JsonEncoder class.

{
   "entities" : [
      {
         "children" : [],
         "components" : [
            {
               "type" : 0
            },
            {
               "type" : 1
            }
         ],
         "name" : "2D Grid",
         "transform" : {
            "position" : {
               "x" : 0,
               "y" : -0.5,
               "z" : 0
            },
            "rotationEuler" : {
               "x" : 0,
               "y" : 0,
               "z" : 0
            },
            "scale" : {
               "x" : 1,
               "y" : 1,
               "z" : 1
            }
         }
      },
      {
         "children" : [],
         "components" : [
            {
               "type" : 0
            },
            {
               "type" : 1
            },
            {
               "type" : 100
            }
         ],
         "name" : "forktruck.md2",
         "transform" : {
            "position" : {
               "x" : 0,
               "y" : -0.5,
               "z" : 0
            },
            "rotationEuler" : {
               "x" : 4.7123889923095703,
               "y" : 0,
               "z" : 0
            },
            "scale" : {
               "x" : 0.0099999997764825821,
               "y" : 0.0099999997764825821,
               "z" : 0.0099999997764825821
            }
         }
      }
   ]
}

The document begins with a list of entities. Each entity contains a name, a transform, a list of children and a list of components. Notice how the transform is a composite object on its own, containing position, rotation and scale objects.

Let’s take a look at the encoded forktruck entity. We see its name has been stored, as well as its complete transform. In the future, when we decode this object, we will be able to create the entity automatically and place it exactly where it needs to be.

Now, you may have noticed that components look a little thin. Component serialization is still a work in progress and, at this time, I am only storing their types. As I continue to work on this feature, components will have all their properties stored as well.
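
As an illustration of where this could go (a sketch only; the encode hook, the JSON value type and the property names below are assumptions rather than the actual vtx::JsonEncoder interface), a component might eventually contribute its own properties like this:

#include <json/value.h>  // assuming a jsoncpp-style Json::Value

// Hypothetical serialization hook on a component. Property names are
// illustrative; only "type" is stored today.
Json::Value RotationComponent::encode() const
{
    Json::Value node;
    node["type"] = typeId();          // e.g. 1
    node["axis"] = "Oy";              // which axis the entity spins around
    node["degreesPerSecond"] = 45.0;  // animation speed
    return node;
}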

On a more personal note, I don’t recall having worked with serialization/deserialization at this scale before. It has already proven to be a large yet satisfying challenge. I am excited at the prospect of being able to save and transfer full scenes, perhaps even over the wire and to a different type of device.

The plan for next week is to continue to work on developing the serialization logic and getting started with the deserialization. Once this task is complete, we will be ready to move on to the next major task: the complete overhaul of the rendering system!

Stay tuned for more!

New Scale Tween Component for the Vortex Engine

This week was pretty packed, but I found some time to write the final addition to the native components for the vertical slice of the Editor. This time around, the new Scale Tween Component joins the Waypoint Tween and Rotation components to provide an efficient way to animate the scale of an entity.

With this new component, Vortex now supports out-of-the-box animation for all basic properties of an entity: position, rotation and scale.

The current lineup of built-in components for Vortex V3 is now as follows:

  • vtx::WaypointTweenComponent: continuously move an entity between 2 or more predefined positions in the 3D world.
  • vtx::RotationComponent: continuously rotate an entity on its Ox, Oy or Oz axis.
  • vtx::ScaleTweenComponent: continuously animate the scale of an entity to expand and contract.

The Scale Tween Component was implemented as a native plugin that taps directly into the Core C++ API of the Vortex Engine. This allows leveraging the speed of native, optimized C++ code to animate an entity’s scale with very low overhead.

In the future, once scripting support has been implemented, a script that wants to alter an entity’s scale will be able to simply add a Scale Tween Component to the affected entity, configure the animation parameters (such as speed, scaling dimensions and animation amplitude) and rely on the Entity-Component System to perform the animation automagically.
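
For the curious, here is a minimal sketch of what such an update step could look like. The update signature and the member names (speed, amplitude, the per-axis flags) are assumptions for illustration, not the component’s actual API.

#include <cmath>

// Hypothetical scale tween update: oscillate the entity's scale around its
// base value on the enabled axes.
void ScaleTweenComponent::update(float deltaSeconds)
{
    phase += speed * deltaSeconds;                 // advance the animation

    const float factor = 1.0f + amplitude * std::sin(phase);

    Vector3 scale = baseScale;
    if (animateX) scale.x *= factor;
    if (animateY) scale.y *= factor;
    if (animateZ) scale.z *= factor;

    entity()->transform().setScale(scale);         // picked up by the renderer next frame
}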

Of course, Scale Tween components can also be added statically to entities by means of the Vortex Editor UI.

Time permitting, next week we’ll finally be able to move on to entity hierarchy and component persistence. I want to roll this feature out in two phases: first implement the load operation and then, once it has proven to be solid, implement saving from the Editor UI.

There’s this and much more coming soon! Stay tuned for more!

Revisiting the Entity View

Work continued on the Vortex Editor this past week. One UI element that has always been somewhat of a functional placeholder, and that I’ve been meaning to revisit, is the Entity View list.

Sitting on the left side of the UI, the Entity View shows all entities in a scene and provides a way to select them. As entities gained the ability to be parented to one another, however, it became apparent this view had to evolve to better express entity relationships.

The new Entity View in the Vortex Editor (shown on the left) displays parenting relationships between entities.

A natural solution to this problem is to represent the entities in a hierarchical view (such as a tree). Not only does this convey which entities are parented to which, it also allows for better organization of the project, as empty entities can now be used to group together other entities related to a particular game/simulation mechanic.

The above screenshot shows the new view in action. This is a 3D model loaded from the Wavefront OBJ format, which allows describing hierarchies of objects. The Vortex Engine’s OBJ loader recognizes these object groupings and represents each one as a separate entity. All of these are then parented to a single entity representing the 3D model in its entirety.
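
A rough sketch of how the view could be populated, assuming a QWidgets-based tree (the Entity accessors below are stand-ins for the real Vortex types):

#include <QTreeWidget>
#include <QTreeWidgetItem>

// Recursively mirror the entity hierarchy into a QTreeWidget.
QTreeWidgetItem* addEntityItem(const Entity& entity, QTreeWidget& tree,
                               QTreeWidgetItem* parentItem = nullptr)
{
    QTreeWidgetItem* item = parentItem ? new QTreeWidgetItem(parentItem)
                                       : new QTreeWidgetItem(&tree);
    item->setText(0, QString::fromStdString(entity.name()));

    // Children appear nested under their parent, mirroring the scene hierarchy.
    for (const Entity* child : entity.children())
        addEntityItem(*child, tree, item);

    return item;
}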

Overall I’m very satisfied with the results. There is still more work to be done, but I think that this is a valuable UX improvement that will go a long way.

Once this work is complete, the plan is to work on the ability to persist entities and their configuration (including all attached components). I have some ideas on how to bring this about, but there are still some interesting problems that will have to be resolved.

Stay tuned for more!

Excellent Intro to Qt Quick

I came across this video that provides a great introduction to the Qt Quick controls in Qt 5.

It’s very interesting to see how a fully fledged, cross-platform app that consumes a Web API can be developed in just over 15 minutes, almost without writing a single line of code.

After seeing this video, I’ve been looking a little more into how C++ can be integrated into Qt Quick apps and, unfortunately, it doesn’t seem to leverage the signal-slot mechanism common to QWidgets applications.

This is a problem, since it means that reusing a large codebase might be a little more involved than seamlessly transitioning from writing QWidgets to slowly rolling out side-by-side Qt Quick panels.

Nonetheless, it’s very impressive and definitely worth taking a look at if you need to develop a quick desktop UI in 10 to 15 minutes.

Implementing a Waypoint System in the Vortex Engine

This week, we’re back to developing new native components for the Vortex Engine. The objective this time was to develop a “Waypoint Tween” component that moves an entity’s position between a series of points.

The new Waypoint Tween Component is used to move a 3D model between four points.

There are two main aspects to bringing the system to life: the component implementation in the Vortex Engine and the UI implementation in the Vortex Editor.

At the engine level, the system is implemented as a C++ component that efficiently performs the math necessary to interpolate point positions based on time and speed.
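
In spirit, the per-frame math boils down to a constant-speed linear interpolation between consecutive waypoints. The sketch below uses illustrative member names, not the component’s actual interface.

// Hypothetical waypoint tween update: move the entity along the current
// segment at a constant speed and advance to the next waypoint when done.
void WaypointTweenComponent::update(float deltaSeconds)
{
    if (waypoints.size() < 2)
        return;

    const Vector3& from = waypoints[currentIndex];
    const Vector3& to   = waypoints[(currentIndex + 1) % waypoints.size()];

    const float segmentLength = (to - from).length();
    if (segmentLength > 0.0f)
        progress += (speed * deltaSeconds) / segmentLength;  // speed is in units/second

    if (progress >= 1.0f) {
        progress = 0.0f;
        currentIndex = (currentIndex + 1) % waypoints.size();
    }

    // Linear interpolation between the two waypoints.
    entity()->transform().setPosition(from + (to - from) * progress);
}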

At the editor level, due to the flexibility of this system, exposing its properties actually required a significant amount of UI work. In this first iteration, points can be specified directly in the component properties of the inspector panel. Later on, the plan is to allow the user to specify the points as actual entities in the world and then reference them.

Animation and Movement

Now, in the animated GIF above, it can be seen that the 3D model is not only moving between the specified points, but also appears to be running between them.

There are two factors at play here to implement this effect: the MD2 Animation and the Waypoint Tween.

The MD2 Animation and Waypoint Tween Components.

When enabled, the Animate Orientation property of the Waypoint Tween component orients the 3D model so that it’s looking towards the direction of the point it’s moving to.

This property is optional, as there are cases where orienting the entity would be undesirable. For instance, imagine a conveyor belt that moves boxes on top of it: it would look weird if the boxes magically rotated on their Oy axis. For a character, on the other hand, it makes complete sense for the model to be oriented towards the point it’s moving to.
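
The orientation itself is not much math. A sketch of the idea, assuming Oy is up and using illustrative names, is to derive a yaw angle from the direction of travel:

#include <cmath>

// Yaw the entity around Oy so it faces the waypoint it is moving towards.
void faceTowards(Entity& entity, const Vector3& from, const Vector3& to)
{
    const float dx = to.x - from.x;
    const float dz = to.z - from.z;

    const float yaw = std::atan2(dx, dz);  // radians around the Oy axis

    entity.transform().setRotationEuler(Vector3(0.0f, yaw, 0.0f));
}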

Regarding the run animation, if you have been following our series on the Vortex Editor, you will remember that when instantiated by the engine, MD2 Models automatically include an MD2 Animation Component that handles everything related to animating the entity.

More details can be found in the post where we detail how MD2 support is implemented, but the idea is that we set the animation to looping “run”.

When we put it all together, we get an MD2 model that is playing a run animation as it patrols between different waypoints in the 3D world.

Waypoint System in Practice

So how can the waypoint system be used in practice? I envision two uses for the waypoint system.

The first one is for environment building. Under this scenario, the component system is used to animate objects in the background of the scene. Case in point, the conveyor belt system described above.

The second use, which might be a little more involved, would be to offload work from scripts. The efficient C++ implementation of the waypoint system would allow a component developed in a scripting language to have an entity move between different points without having to do the math calculations itself.

The dynamic nature of the component would allow such a script to add and remove points, as well as interrupt the system at any time to perform other tasks. An example would be a monster that uses the waypoint system to patrol an area of the scene until it detects that a player is close; at that point the system is interrupted and a different system takes over, perhaps to attack the player.

In closing

I had a lot of fun implementing this system, as it brings a lot of options to the table in terms of visually building animated worlds for the Vortex Engine.

The plan for next week is to continue working on the Editor. There is some technical debt on the UI I want to address in order to improve the experience and there are also a couple of extra components I want to implement before moving on to other tasks.

As usual, stay tuned for more!

OpenGL from a 10,000ft view

This month marks 10 years since I started learning and using OpenGL, and what a ride it has been! I started off with basic OpenGL 1.1 back at university under the guidance of my mentor and ex-Googler Gabriel Gambetta, then moved on to the programmable pipeline by teaching myself how to write shaders, and finally rode the wave of the mobile revolution with OpenGL ES on the iPhone.

OpenGL Logo. Copyright (C) Khronos Group.

As part of this process, I’ve also had the privilege of teaching OpenGL to others at one of the most important private universities back home. This pushed me to keep learning more about the API and to improve my skills.

Rather than writing a retrospective post to commemorate the date, I thought I’d do something different. In this post I’m going to explain how OpenGL works from a 10,000ft view. I will lay out the main concepts of how vertex and triangle data gets converted into pixels on the screen and, in the process, explain how a video card works. Let’s get started!

What is OpenGL

At the most basic level, OpenGL can be seen as a C API that allows a program to talk to the video driver and request operations or commands to be performed on the system’s video card.

Titan X GPU by NVIDIA. Image courtesy of TechPowerup.com

So what is a video card? A video card (or GPU) is a special-purpose parallel computer, excellent at executing a list of instructions on multiple data at the same time. A video card has its own processors, its own memory and it’s good at performing one particular set of operations (namely, linear algebra) very very fast.

What OpenGL gives us is access to this device through a client/server metaphor, where our program is the client that “uploads” commands and data to the video driver. The video driver, the server in this metaphor, buffers this data and, when the time is right, sends it to the video card for execution.

Our program’s “instance” inside the video driver is known as the OpenGL Context. The Context is our program’s counterpart in the video card and it holds all the data we’ve uploaded (including compiled shader programs) as well as a large set of global variables that control the graphics pipeline configuration. These variables comprise the OpenGL State and they’re the reason OpenGL is usually seen as a State Machine.
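
A couple of lines of everyday OpenGL illustrate this: each call below silently flips a global switch in the Context, and every subsequent draw call is affected by it.

// Typical OpenGL state changes: nothing is drawn here, but the Context
// (the "state machine") is reconfigured for all draw calls that follow.
glEnable(GL_DEPTH_TEST);                              // enable depth testing
glEnable(GL_BLEND);                                   // enable blending...
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);    // ...and configure how colors mix
glClearColor(0.2f, 0.2f, 0.2f, 1.0f);                 // set the color used by glClear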

The Graphics Pipeline (Simplified)

Remember how I mentioned that the video card excels at performing a limited set of operations very very fast? Well, what are these operations?

The way the video card works is that data is fed into it through OpenGL and then goes through a series of fixed steps that operate on it to ultimately generate a series of pixels. It is the job of the video card to determine which pixels should be painted and with which color. And that’s really all the video card does at the end of the day: paint pixels with colors.

The following image, taken from Wikipedia, shows a simplified view of the data processing pipeline that OpenGL defines.

A simplified view of the OpenGL pipeline. Source: Wikipedia

In this image, imagine the data coming from the user program. This diagram shows what is happening to this data inside the video card.

Here:

  1. Per-Vertex Operations: are operations that are applied to the vertex data supplied. This data can be coming in hot from main system memory or be already stored in video card memory. Here is where vertices are transformed from the format they were originally specified in into something we can draw on the screen. The most common scenario here is to take a piece of geometry from Object Space (the coordinate system the artist used), place it in front of a virtual “camera”, apply a perspective projection and adjust its coordinates. In the old days, here is where all transformation and lighting would take place. Nowadays, when a shader program is active, this stage is implemented by the user’s Vertex Shader.
  2. Primitive Assembly: here’s where OpenGL actually assembles whatever vertex data we supplied into its basic primitives (Triangle, Triangle Strip, Triangle Fan, Lines or Points, among others). This is important for the next step.
  3. Rasterization: is the process of taking a primitive and generating discrete pixels. It amounts to, given a shape, determining what pixels said shape covers. If the conditions are right, texture memory can be sampled here to speed up the texturing process.
  4. Per-Fragment Operations: are operations performed on would-be pixels. If there is a shader program active, this is implemented by the user in the fragment shader. Texture mapping operations take place here, as well as (usually) shading and any other operations that the user can control. After this stage, a number of operations take place based on the State Machine. These operations include depth testing, alpha testing and blend operations.
  5. Framebuffer: finally, this is the image we are rendering our pixels to. It is normally the screen, but we can also define a texture or a Render Target object that we could then sample to implement more complex effects. Shadow Mapping is a great example of this.

Sample OpenGL Program

Having taken a (very quick) look at OpenGL, let’s see what a simple OpenGL program might look like.

We are going to draw a colored triangle on the screen using a very simple script that shows the basic interaction between our program and the video card through OpenGL.

A colored triangle drawn by a simple program exercising the OpenGL API.

I’m using Python because I find that its super simple syntax helps keep the focus on OpenGL. OpenGL is a C API, however, and in production code we tend to use C or C++. There are bindings available for Java and C# as well but, mind you, these just marshal the calls into C and invoke the API directly.

This script can be divided into roughly three parts: initializing the window and OpenGL context, declaring the data to feed to the video card, and a simple event loop. Don’t worry, I’ll break it down in the next section.

#!/opt/local/bin/python2.6
import pygame
from OpenGL.GL import *

def main():
	# Boilerplate code to get a window with a valid
	# OpenGL Context
	w, h = 600, 600
	pygame.init()
	pygame.display.set_caption("Simple OpenGL Example")
	scr = pygame.display.set_mode((w,h), pygame.OPENGL|pygame.DOUBLEBUF)
	
	glClearColor(0.2, 0.2, 0.2, 0.0)

	# Data that we are going to feed to the video card:
	vertices = [ \
		-1.0, -1.0, 0.0, \
		1.0, -1.0, 0.0,  \
		0.0, 1.0, 0.0 ]

	colors = [ \
		1.0, 0.0, 0.0, 1.0, \
		0.0, 1.0, 0.0, 1.0, \
		0.0, 0.0, 1.0, 1.0 ]
			

	# Here's the game loop, all our program does is
	# draw to a buffer, then show that buffer to the
	# user and read her input.
	done = False
	while not done:
		# Clear the framebuffer
		glClear(GL_COLOR_BUFFER_BIT)
		
		# Supply the video driver a pointer to our
		# data to be drawn:

		glVertexPointer(3, GL_FLOAT, 0, vertices)
		glEnableClientState(GL_VERTEX_ARRAY)

		glColorPointer(4, GL_FLOAT, 0, colors)
		glEnableClientState(GL_COLOR_ARRAY)

		# Now that all data has been set, we tell
		# OpenGL to draw it, and which primitive
		# our data describes. This will be used
		# at the primitive assembly stage.
		glDrawArrays(GL_TRIANGLES, 0, 3)

		# Clean up
		glDisableClientState(GL_COLOR_ARRAY)
		glDisableClientState(GL_VERTEX_ARRAY)
		
		# Show the framebuffer
		pygame.display.flip()

		# Process input:
		for evt in pygame.event.get():
			if evt.type == pygame.QUIT:
				done = True

if __name__ == "__main__":
	main()

If you’re familiar with OpenGL, you’ll notice I’m using mostly OpenGL 1.1 here. I find it’s a simple way to show the basic idea of how data is fed into the video card. Production-grade OpenGL will no doubt prefer to buffer data on the GPU and leverage shaders and other advanced rendering techniques to efficiently render a scene composed of thousands of triangles.

Also note that the data is in Python list objects and, therefore, the pyopengl binding is doing a lot of work behind the scenes to convert it into the float arrays we need to supply to the video card.

We would never do this in production code; however, doing anything more efficient here would require fiddling with pointer syntax that would undoubtedly make the code harder to read.
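
For comparison, here is a hedged C++ sketch of the “production” path alluded to above: the triangle is uploaded once into a VBO and only a draw call is issued per frame. Shader program and VAO setup are omitted, and a current OpenGL context is assumed; attribute 0 standing for the shader’s position input is also an assumption.

// Upload the triangle into GPU memory once...
const GLfloat vertices[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// ...describe it once (attribute 0 assumed to be the position input)...
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);

// ...and then, every frame, simply draw. No per-frame data transfer.
glDrawArrays(GL_TRIANGLES, 0, 3);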

Putting it all together

Now, if you’re unfamiliar with OpenGL code, let’s see how our program is handled by the Graphics Pipeline.

		# Supply the video driver a pointer to our
		# data to be drawn:

		glVertexPointer(3, GL_FLOAT, 0, vertices)
		glEnableClientState(GL_VERTEX_ARRAY)

		glColorPointer(4, GL_FLOAT, 0, colors)
		glEnableClientState(GL_COLOR_ARRAY)

We start off by providing an array of vertices and an array of colors to OpenGL, as well as a description of how this data is to be interpreted. Our calls to glVertexPointer and glColorPointer (in real life you would use glVertexAttribPointer instead) tell OpenGL how our numbers are to be interpreted. In the case of the vertex array, we say that each vertex is composed of 3 floats.

glEnableClientState is a function that tells OpenGL that it’s safe to read from the supplied array at the time of drawing.

		# Now that all data has been set, we tell
		# OpenGL to draw it, and which primitive
		# our data describes. This will be used
		# at the primitive assembly stage.
		glDrawArrays(GL_TRIANGLES, 0, 3)

glDrawArrays is the actual function that tells OpenGL to draw, and what to draw. In this case, we are telling it to draw triangles out of the data we’ve supplied.

After this call, vertex data will go through the per-vertex operations stage and then be handed off to the primitive assembly, which will effectively interpret the vertices as forming part of one (or more) triangles.

Next, the rasterization stage will determine which pixels of the framebuffer would be covered by our triangle and emit these pixels, which then go to the per-fragment operations stage. The rasterization stage is also responsible for interpolating vertex data over the triangle; this is why we get a color gradient spanning the area of the triangle: it’s simply the interpolation of the colors at the three vertices.

All of this happens inside the video card, in parallel with our event loop, which is why there is no source code to show for this part.

		# Show the framebuffer
		pygame.display.flip()

Finally, after everything is said and done, the video card writes the resulting pixels to the framebuffer, and we make them visible to the user by flipping the buffers.

In Closing and Future Thoughts

We’ve barely scratched the surface of what OpenGL is and can do. OpenGL is a big API that has been around for 20+ years and has kept adding new features as video card and video game companies continue to push for ever more realistic graphics.

Now, while 20+ years of backwards compatibility allow running old code almost unmodified on modern systems, design decisions accumulated over time tend to obscure the optimal path to performance, as well as impose restrictions on applications that would benefit from more direct control of the video card.

Vulkan logo. ™ Khronos Group.

These points, made by the Khronos Group itself, have led to the design and development of a new graphics API standard called Vulkan. Vulkan is a break from the past that provides a slimmed-down API more suitable for modern hardware, in particular for multi-threaded and mobile applications.

OpenGL, however, is not going away any time soon. The plan for the Khronos Group, at least for the time being, appears to be to offer both APIs side by side and let developers choose the one most suitable to the problem at hand.

Additionally, with Apple focusing on Metal and Microsoft on DX12, OpenGL (in particular OpenGL ES 2.0) remains the only truly cross-platform API that can target almost every relevant device on the planet, be it an iPhone, an Android phone, a Windows PC, GNU/Linux or Mac.

Finally, the large body of knowledge surrounding 20+ years of OpenGL being around, coupled with OpenGL’s relative “simplicity” when compared to a lower-level API such as Vulkan, may make it a more interesting candidate for students learning their first hardware-accelerated 3D API.

As time marches on, OpenGL remains a strong contender, capable of pushing anything from AAA games (like Doom) to modern-day mobile 3D graphics and everything in-between. It is an API that has stood the test of time, and will continue to do so for many years to come.

Building a Vortex Engine scene using Vortex Editor

This week I’ve been working on several UX enhancements for the Editor to improve the overall scene-building experience.

Building a scene for the Vortex Engine using the Vortex Editor.

One thing to notice in the above image is the new horizontal grid on the floor. Each cell is exactly 1×1 world unit in size, which helps tremendously to keep a sense of scale and distance between objects when building a scene.
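
As an aside, the geometry behind such a grid is simple to generate. Here is a hedged sketch; the function name and the use of GL_LINES are assumptions about how the Editor does it.

#include <vector>

// Generate line vertices for a (2 * halfExtent) x (2 * halfExtent) grid of
// 1 x 1 world-unit cells on the XZ plane, centered at the origin.
std::vector<float> buildGridVertices(int halfExtent)
{
    std::vector<float> verts;
    for (int i = -halfExtent; i <= halfExtent; ++i) {
        // Line parallel to the Z axis at x = i.
        verts.insert(verts.end(), { float(i), 0.0f, float(-halfExtent),
                                    float(i), 0.0f, float( halfExtent) });
        // Line parallel to the X axis at z = i.
        verts.insert(verts.end(), { float(-halfExtent), 0.0f, float(i),
                                    float( halfExtent), 0.0f, float(i) });
    }
    return verts;  // 3 floats per vertex, intended to be drawn as GL_LINES
}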

What I like about this image is how it shows the different components of the Editor coming together to make the Vortex Engine easier to use than ever. The more complete the Editor is, the quicker we can move on to start implementing the new rendering backend for Vortex, codename “V3”.

Stay tuned for more weekly updates!