These last few days I’ve been improving the Vortex-Engine support for Md2 Model files.
Vortex Instancing Test
Md2 models can now be “instanced” from the same data at the engine level (not to be confused with GPU instancing), and a different animation can be assigned to each model. The short-term goal is to let entities create their own Md2 model instance and animate it however they like, without causing conflicts with other entities that instantiate the same model.
There’s a video I created where you can see the test, right after the jump.
In my last article, I provided some overall details on how to load Quake 2 model files (also known as md2 files) and described how these models store their animations in so-called keyframes.
At the end of that post I noted that, as you could see in the video, the model animation looks rather “choppy”. In this post I will explain what we can do to improve the rendering process for models animated through keyframes.
What we are going to do in order to smooth the animation is, instead of just rendering the keyframe data directly, take advantage of a factor that we haven’t completely explored yet: time.
If by the end of the last post you started working on rendering md2 models, chances are so far you have some sort of time control that allows you to determine which keyframe to render.
The rendering problem we face is that sometimes, for a given elapsed time, the keyframe we should draw lies between two stored keyframes. Let’s call these the nth and (n+1)th keyframes.
In order to draw the model to the screen, we face two choices: select the nth keyframe or the (n+1)th keyframe. What’s important to note is that either choice will be wrong. The data we actually need does not exist in the file, so it’s impossible to render the model exactly. Forcing the choice to the nth or (n+1)th keyframe is what makes our model animation look choppy.
Not everything is lost though. Even if we don’t have the data we need, we can infer it.
Should our needed keyframe actually exist, we know it would lie between two stored keyframes. Since we have some sort of time control mechanism, we can determine which two these are. Under these conditions, we can compute the “distance” from the current time mark to each keyframe’s time.
Using this information, we can guess the keyframe that’s supposed to lie in between by interpolating both keyframes in time. The idea is to perform a convex sum. The following C++ pseudocode depicts the concept:
for (size_t i = 0; i < _numVertices; i++)
    guess[i] = _keyframes[n][i] * (1 - t) +
               _keyframes[n+1][i] * t;
Here, guess holds our guessed keyframe as an array of Vertex elements, where Vertex is a struct composed of three floats, and t is a parameter derived from the elapsed time and adjusted to the [0,1] interval. When t=0, the keyframe to render is exactly the nth keyframe. Conversely, t=1 indicates the keyframe to render is the (n+1)th. Finally, notice that when t=0.5, the guessed frame lies halfway between keyframes n and n+1.
The following video shows the improved rendering process in action:
Notice how smooth the animation actually looks. This video was generated by running the animation at just 2 frames per second.
Interpolating between keyframes is certainly possible and, in our conducted tests, it really helped smooth the animation process.
This added visual quality does come at a cost, though. Every time we animate the model, we have to iterate over the entire vertex array, interpolating the values of two consecutive keyframes. The additional CPU overhead is well worth it nevertheless, dramatically improving the quality of our animation.
Many factors have been left out of this post. In particular, time control, elapsed time calculation and how to determine which keyframe should be rendered at a given time were all omitted. I believe these features are dependent on the kind of application being written and different formulae will or will not work in different situations. The best thing to do is to design a model that works for your application and produce a simple way to obtain the data needed to apply these concepts.
By mid June 2010 I worked on a personal project that consisted of rendering models from the classic Quake 2 game by Id Software. Here’s a video of the renderer.
Quake 2 was one of my favorite games when I was young and researching how to load and render its 3D models instantly sent me down the nostalgia path.
For those who might be interested in developing their own loader, Quake 2 models are very easy to read and parse from languages such as C and C++. This is mostly because .md2 files use a simple binary format that we can read directly into C structs.
David Henry does a great job at explaining how the bytes are stored internally, so I’m not going to reproduce his work here. Nonetheless, here’s the md2 file header:
Quake 2 model files organize vertices and triangles into keyframes: a series of poses that enable us to draw the model animated. The texture to use when rendering the model is referenced by the file too. Each file might point to zero or more textures in the PCX format. Textures are called the model’s “skin” in Quake 2’s terminology.
The video above was generated using the custom renderer that I wrote. It uses OpenGL and GLUT to create the window and draw the model.
Now, you’ve probably noticed that the animation in the video above looks rather “choppy”. It’s definitely not smooth. The reason is that keyframes provide a base for animating the model, but there aren’t enough of them to produce a “continuous” animation.
In my next post I’m going to show you how we can improve this situation. Stay tuned!
For the past month and a half I’ve been working on and off on a project: a custom Quake 2 BSP map renderer. The renderer was written from scratch using nothing but C++ and OpenGL 2.0.
Click on the image for more screenshots.
The project’s objective was twofold: to help develop the foundation for a custom object-oriented rendering engine, and to test my skills at what seemed like a daunting task: loading, rendering and navigating Quake 2 maps in real time.
There is a lot of information on the Internet regarding Id Software’s BSP file format and the way polygons, texture data, lightmaps and any other aspects of the game are stored. In my experience, no website provides “the whole picture”, so if you’re planning on rolling your own renderer, be prepared to spend as much time doing research online and developing and testing ideas as writing code.
In its current state, I consider the renderer to be in pretty good shape: it supports loading maps directly from Quake 2’s original PAK files, rendering their polygons, performing texture mapping, applying lightmaps, adding a skybox, and leveraging the map’s visibility information to avoid sending large amounts of non-visible geometry to the video card. I’m glad I’ve been able to approximate the game’s look and feel in what I consider very close to the original.
There are still many open points that could be addressed, such as experimenting with blur effects to increase the sky’s realism and bring life into the skybox. All texture mapping and lighting is currently done using shaders, so the basic foundation for post production visual effects is laid and ready to be built upon.
At some point, I might provide executable versions of the renderer so you can try it yourself. In the meantime, if you’d like to view other rendered images, I’ve added more screenshots from this project to the Projects page of this site.
Here’s also a YouTube video showing an actual run of the Renderer:
Quake 2 and all of its art is Copyright (C) Id Software. Thanks to Id for creating such a great game.