Let's have fun together. I'll create things and you observe me.

This is a blog detailing all the projects I work on. It's a record of where things are at and a pinboard of small, random bits and pieces of creation.

I share anything useful I've come across during development, so if you're trying to solve a similar problem, checking the labels on the right may be of assistance.

Feel free to leave a comment. Also, please take note that 90% of these blogs are compiled at 3 in the morning. The true hour of the day.

Enjoy your stay.


Friday, 27 June 2008

Final Battle!

Exams are done and dusted! It felt as epic as defeating the final boss in a game. So far I have gotten back my mark for Software Engineering: ~80%, so a D. Quite chuffed. So, besides basking in many games of Team Fortress 2, I've been noodling around with ideas for what I should focus on next with Synergy.

I did some light research into the physics engine libraries that are out there, and the selection is extensive. I was impressed. At this point the prime candidate looks to be Bullet. Failing that, perhaps ODE. I made an attempt to compile some of the many MSVC++6 projects that come with Bullet's SDK; some worked, some didn't. There's no prebuilt binary for the library, so I don't know exactly how to proceed. Investigate the forums, I suppose...

I decided to leave that kind of expedition to a later date, primarily because I had a headache and just didn't feel like I had the energy for it. Instead I decided to perform some optimisation on the 3DS loader function, which is still employing the crutch of a brute-force algorithm to calculate smoothed normals, resulting in a stall of close to 5 minutes to load a 50,000-polygon, single-smoothing-group (worst-case) model.

After chewing over the problem for a little while, I sketched out a rough algorithm that would run in O(fv) time, where f = number of faces and v = number of vertices. After implementing it, I found the algorithm `almost' works. Satisfyingly, however, it took about 2 seconds to load in that same 50,000-poly model! A very significant speed increase. Conceptually, what the algorithm `should' be doing is exactly the same as the brute-force algorithm, but somewhere in the implementation there is a deviation which I have yet to isolate.

I thought the incorrectness may have stemmed from the fact that .3DS files export duplicate vertices for the same coordinate in space. To remove this potential threat to the algorithm, I set about implementing a welding algorithm that would merge duplicate vertices into a single vertex list entry, thus compressing the total number of vertices in the mesh.

This algorithm posed some very interesting challenges. The root challenge is that it could not afford to be less efficient than the proposed smoothing algorithm (O(fv)), and from that root many other challenges spawn. The algorithm I sketched out is quite straightforward, but the implementation proved to be a little trickier. The main issue was `how do you compare vertices?'

I divined the answer to this question whilst going to the toilet (details, details...). Essentially the order you compare the components of a vertex depends upon the order you sort them in, which seems obvious upon reflection. With a newly crafted comparison function, I had myself a linear-time welding function.
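For what it's worth, a sort-based weld along those lines can be sketched like this (hypothetical types and names, not the actual Synergy code; the sort makes it O(v log v), near-linear in practice):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <vector>

struct Vec3 { float x, y, z; };

// Lexicographic comparison: the components are compared in the same order
// the sort key uses -- the insight the post describes.
static bool lessXYZ(const Vec3& a, const Vec3& b) {
    if (a.x != b.x) return a.x < b.x;
    if (a.y != b.y) return a.y < b.y;
    return a.z < b.z;
}
static bool sameXYZ(const Vec3& a, const Vec3& b) {
    return a.x == b.x && a.y == b.y && a.z == b.z;
}

// Welds duplicate vertices. Returns the compacted vertex list and fills
// `remap` so that old index i becomes remap[i] in the new list.
std::vector<Vec3> weld(const std::vector<Vec3>& verts,
                       std::vector<std::size_t>& remap)
{
    std::vector<std::size_t> order(verts.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(), [&](std::size_t a, std::size_t b) {
        return lessXYZ(verts[a], verts[b]);
    });

    std::vector<Vec3> out;
    remap.assign(verts.size(), 0);
    for (std::size_t k = 0; k < order.size(); ++k) {
        if (k == 0 || !sameXYZ(verts[order[k]], verts[order[k - 1]]))
            out.push_back(verts[order[k]]);   // first occurrence: keep it
        remap[order[k]] = out.size() - 1;     // duplicates map to the kept copy
    }
    return out;
}
```

After welding, a single pass over the face indices through `remap` finishes the job.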

However, this welding function did not fix the smoothing algorithm. I've decided to leave further debugging until I've had a night's sleep, nevertheless progress is promising!

EDIT (14:14, 27/06/2008): The algorithm is working! I was missing a test for the final pass on indices to assign new normal values...I wasn't testing if the face was part of the smoothing group! How obvious. Amazing what some sleep can do. The code for the linear-time normal smoothing is pretty decent.
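As a rough illustration of the accumulate-and-normalise idea (hypothetical types, not the actual SynMesh code), the single-smoothing-group case can be sketched like this:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical minimal types -- not SynMesh's actual layout.
struct Vec3 { float x, y, z; };
struct Face { int v[3]; Vec3 normal; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 normalise(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// One pass over the faces accumulates each face normal into the vertices
// that face touches; one pass over the vertices normalises the sums. In a
// real 3DS loader each accumulation would also be gated on the face
// sharing a smoothing-group bit with the group being smoothed -- the test
// whose absence broke the first version.
std::vector<Vec3> smoothNormals(const std::vector<Face>& faces,
                                std::size_t vertexCount)
{
    std::vector<Vec3> normals(vertexCount, Vec3{0.0f, 0.0f, 0.0f});
    for (const Face& f : faces)                 // accumulation pass
        for (int i = 0; i < 3; ++i)
            normals[f.v[i]] = add(normals[f.v[i]], f.normal);
    for (Vec3& n : normals)                     // normalisation pass
        n = normalise(n);
    return normals;
}
```

The brute-force version compares every face against every vertex; this version only ever touches each face and each vertex a constant number of times, which is where the 5-minutes-to-2-seconds jump comes from.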

I've also had a look back over the mesh rendering function, and split it depending on which shade model is enabled, meaning that rendering in Flat mode will use the face normal, rather than the normal of the first index in the face, which gave incorrect results.

I'm very happy that I managed this speed increase! Now I can consider physics integration with a clear state of mind.

Wednesday, 18 June 2008

Give Me Some Skin!

After suffering an 08:30 exam and the apocalyptic effects it has on my sleeping patterns, I felt I might loosen the screws in my mind by taking another look at implementing texture mapping in Synergy.

My suspicions lay with the library I was using to load texture data into OpenGL. To test this, I downloaded and integrated SOIL. It turned out I could load textures, but they weren't being mapped to the meshes at all.

That made me realise I had left the texture mapping aspect of the SynMesh class, well, hanging. Going back over it was a simple affair, though I had to make an educated guess as to how the .3DS file's texture mapping data is actually organised with respect to polygons. It is, of course, the obvious answer: for each vertex in the vertex list, the texture vertex at the same relative position in the texture vertex list is that vertex's texture coordinate.
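That parallel-array guess can be made explicit with a small sketch (hypothetical structs, not SynMesh's real layout):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };
struct TexturedVertex { Vec3 pos; UV uv; };

// The .3DS layout guessed at above: the texture vertex list runs parallel
// to the vertex list, so uvs[i] is the texture coordinate of verts[i].
// Zipping the two arrays makes the pairing explicit for rendering.
std::vector<TexturedVertex> pairTexCoords(const std::vector<Vec3>& verts,
                                          const std::vector<UV>& uvs)
{
    assert(verts.size() == uvs.size());   // parallel arrays must match
    std::vector<TexturedVertex> out;
    out.reserve(verts.size());
    for (std::size_t i = 0; i < verts.size(); ++i)
        out.push_back({verts[i], uvs[i]});
    return out;
}
```

At draw time each face index then pulls both the position and the UV from the same slot.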

A little tweaking of SynMesh.render(), and various other spots in the demo code, and I was in business:

Bless my flesh!

Note, the skin for the tree actually isn't finished. So that's a data thing, not a processing error. Very satisfying, after the endless battle with AllegroGL just to do basic texture mapping. Tsk tsk. Here's a lit screenshot, just for fun:

Trees of Bliss

Sunday, 15 June 2008

Criminally Smooth

:). I am pleased. It was a long, hard battle, with much contemplation, testing, probing and trial and error, but Synergy now sports Smoothing Group functionality!

Loading in the smoothing group info was a small mission in itself, one which began with trying to find info on how to get it out of a .3DS file. Implementing this actually spurred a revolution in my coding in being very specific about my longs (`32 or 64 bit?'), which somehow seems to have increased the frame rate of Synergy's various demos by up to 6Hz?!

Building upon the code from that tutorial I linked to in the previous post, the code for reading smoothing groups is about as simple as it gets.
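A hedged sketch of what such a reader can look like (the 3DS SMOOTH_GROUP chunk, id 0x4150, stores one 32-bit bitmask per face; the buffer handling here is illustrative, not the tutorial's code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Reads the per-face smoothing-group bitmasks out of a SMOOTH_GROUP
// (0x4150) chunk body. Using plain `long` here is exactly the trap the
// post mentions -- it's 64-bit on some platforms -- so fixed-width
// uint32_t keeps the file layout honest.
std::vector<uint32_t> readSmoothingGroups(const unsigned char* chunkBody,
                                          std::size_t faceCount)
{
    std::vector<uint32_t> groups(faceCount);
    for (std::size_t i = 0; i < faceCount; ++i) {
        uint32_t g;
        // .3DS is little-endian; this memcpy assumes a little-endian host.
        std::memcpy(&g, chunkBody + i * sizeof(uint32_t), sizeof g);
        groups[i] = g;
    }
    return groups;
}
```

Two faces smooth against each other exactly when their bitmasks share a set bit (`a & b != 0`).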

I pondered for most of the day on how the process would actually work, sewing threads of ideas through the problem in my head, considering different approaches. What I finally ended up with was a brute-force algorithm with slight optimisations.

The road to reach this code was treacherous, though, with some interesting results along the way:

A World With 1 Normal

This problem was isolated and corrected relatively quickly: I hadn't initialised the indices' normals to blank (zeroed) normals before accumulating into them. So fixing that brought me to this:

Revenge Of The Cubes

Great. So some things were looking lovely, others completely screwed. This proved to be part of a bigger problem; it also created seams:

Almost Anatomically Correct!

And it generally just led to strange lighting behaviour on flat surfaces:

Colt 1911s Never Looked So... Jaded

Although for the most part that Colt 1911 model (made by Dylan) looks very very sexy, one can quickly pick out areas which are rendered incorrectly.

I found the problem to be that the first index in a smoothing group wasn't being included in the normal calculation! With that fixed, Synergy could now do the Colts justice:

Quantum Sexiness

Not to mention the skulls!

Mission: Accomplished

So, what's next? Study! It's time to stop getting sucked into my projects and turn my attention towards trying to get HDs for my end of semester exams. So, until next week I guess Synergy will have to wait.

Saturday, 14 June 2008

Meshin' Accomplished

Yeah I know, I'm the lamest person on Earth for using that title. But I don't care, I've succeeded too much today to feel anything but empirically chuffed. Despite cleaning up baby vomit at my Day Job, my time was devoted wholly to Synergy.

After working on the SynMesh and SynModel classes last night, I was keen to get a function running that could load in mesh data from some 3D model file format. After some research I eliminated the majority of options, including all formats without bone/keyframe animation, and all formats which aren't widely accessible, that is, not supported by a wide range of modelling programs. That left me with 3D Studio Max's .3DS format. Big surprise there.

Like all great things, my quest for this functionality began with a Google search, which led me to a very useful tutorial. After some copying, pasting and butchering of this code into a form suitable for my own, I began, with breath guarded, to integrate it into the cubetopia demo, replacing my hand-coded geometry with SynMeshes loaded from a .3DS geosphere.

Some small tweaks here and there and BAM! Meshes loaded:

Trippin' Balls

I suppose it can't really be called `cubetopia' anymore. Perhaps gala would be more fitting. A number of issues were outstanding at this point. The foremost was that there were no normals! So I set about calculating normals upon loading each face. That led to some interesting mistakes:

Really Trippin' Balls

After becoming friendlier with the concept of vector normalisation, I got this:

Find the Lightsource
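For reference, the per-face calculation boils down to a normalised cross product; a minimal sketch (not Synergy's actual maths code):

```cpp
#include <cassert>
#include <cmath>

// Minimal vector helpers for the sketch.
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
static Vec3 normalise(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// Face normal for a counter-clockwise wound triangle: the cross product of
// two edges, scaled back to unit length. Skipping the normalise step is
// exactly the kind of mistake that produces lighting like the above.
Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c) {
    return normalise(cross(sub(b, a), sub(c, a)));
}
```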

After loading in a cube mesh, I discovered the reason for this bizarre lighting was that the meshes were inside out! So, setting glFrontFace(GL_CCW) I finally achieved a well rounded result:

Lo! 'Tis a Gala Night

Feeling very satisfied with myself, I was eager to push the code and see just how far I'd actually gotten it to work. Next was a model of a tree that I was working on once upon a time, and to my delight it worked straight away, giving a pretty funky result:


I began to notice a significant delay in loading the .3DS file, so I decided to see how much this delay scaled with input size. I needed a high-poly model... and then I remembered a skull I was sculpting in ZBrush once upon a time. This led to a huge delay of almost 2 minutes to load the file. It was obvious where the hang-up was: the obscene amount of debug info I was printing to the console and the log file. I commented those out and the delay was vanquished, which led me to the final product for the evening:

Synergy's High Poly Flat Shading v0.00001

Each skull is ~50k polygons, so there are ~400k polys being rendered there. It is both a milestone and a map of the steps yet to take. The most obvious next step is to implement Smoothing Groups. For instance, here is the exact same model rendered in Max with smoothing groups on:

The Next Mission

Big difference. So that's about it; excellent progress for 5 hours work, in my opinion. I hope to make the same leaps and bounds tomorrow, but we'll see...


Working spasmodically over the last couple of days. Felt like I needed to back off a little bit, since my mind was beginning to race constantly, and it was affecting my sleeping patterns, which in turn affected my productivity. So work performed must be measured to ensure optimal output.

Chilled out now though, thanks to good old Elliott. The relaxed mind state has assisted in persistence and patience in moments of slow progress, such as being unable to crack the nut of OpenGL texture mapping for Synergy. I've gone over every line of code a number of times, and can find no deviation from the example code I have been observing, yet my code does not work.

My only conclusion is that it is some combination of operations I am doing which on their own are valid, but in sequence lead to the failure of texture mapping. I believe it has something to do with the way I have set up materials, in combination with the parameters I have provided AllegroGL to load the bitmaps with.

After spending a couple of hours here and there each day trying to crack it, I set it aside and turned my time towards more fruitful endeavours: models and animation. Progress has been going well with these, and I have the SynMesh and SynModel classes laid out, with a little bit of body to them. I'll be sinking my teeth into the gut of them sometime soon. I have a very clear idea in my head of how the entire Skeletal Animation Method works; a depth-first traversed tree of matrices which are pushed onto a matrix stack similar to OpenGL's method of rendering.
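A toy sketch of that traversal (hypothetical types, not SynModel's real code): each bone holds a transform relative to its parent, and the depth-first walk multiplies transforms down the tree, with the recursion playing the role of OpenGL's glPushMatrix/glPopMatrix stack.

```cpp
#include <cassert>
#include <vector>

struct Mat4 { float m[16]; };                       // row-major

Mat4 identity() {
    Mat4 r{};
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}
Mat4 translate(float x, float y, float z) {
    Mat4 r = identity();
    r.m[3] = x; r.m[7] = y; r.m[11] = z;
    return r;
}
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a.m[i * 4 + k] * b.m[k * 4 + j];
            r.m[i * 4 + j] = s;
        }
    return r;
}

struct Bone { Mat4 local; std::vector<int> children; };

// Depth-first traversal: world[i] = parent's world transform * local.
void computeWorld(const std::vector<Bone>& bones, int node,
                  const Mat4& parent, std::vector<Mat4>& world)
{
    world[node] = mul(parent, bones[node].local);   // the "push"
    for (int child : bones[node].children)
        computeWorld(bones, child, world[node], world);
    // returning from the recursion is the "pop"
}
```

With real animation the `local` matrix would be rebuilt from keyframes each frame before the walk.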

I've also been talking with Dylan about a test bed for Synergy. That is, a game built with it to test and showcase its features. Those are key goals for the game, but we will certainly not ignore the most important goal: to make an insanely fun, insanely cool game we (and others) can play.

The concept we decided on was arrived at unanimously, silently and in parallel with each other: we basically said at the same time, `It should be Galileo Complex!'

`What's that?' you ask. Well, how do I keep this from mutating into a huge story... basically it's a game concept we came up with a couple of years ago that we never really had the resources and faculties to pull off at the time, but which was always going to be a cool idea. It's a top-down action game that is a blend between deathmatch and RTS. You control a commander who leads a group of AI-controlled players, and you must complete objectives in order to progress through the current level. Objectives change depending on what other commanders have done, and matches can be a back-and-forth tug of war over areas of the level and its various objectives and resources.

I won't really go into more detail, but that's the core of it. We are inspired by the elegance of Quake's deathmatch (Quake 1, that is) and Total Annihilation's RTS combat and interface. So our feature list pools a lot of things we observe these games exhibiting, a key one being that the game should be extendible. That is, people should be able to mod it easily and in a fun way. There are lots of exciting ideas, and I'm itching to get the Synergy Engine up to a point where we can start actually sinking our teeth into this thing.

Originally I thought the game could be an orthographic, fixed camera angle viewport. That is, essentially a 2D game. We're thinking of going with perspective rendering now, but an interesting design idea I had for the 2D thing was character-sprites that had Normal Maps, thus allowing for real-time lighting to be calculated for them, giving them a very 3D appearance without having to rasterise polygons.

While waiting for my tea to brew, I paced the living room and devised a rough algorithm for calculating the normal-mapped sprite from a 3D model.

I haven't really checked it for correctness or efficiency (should be roughly O(xy)), but it feels pretty solid. Despite the fact we're probably going with perspective projection for Galileo, I'll still implement this feature in Synergy. We'll probably end up using it in a number of places in the game anyway; it's too cool not to fiddle around with.
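I won't post the algorithm itself, but one piece of the idea, packing per-pixel normals into an RGB normal map in a single O(xy) pass, can be sketched like this (illustrative only; how the normals get sampled from the model in the first place is the interesting part, and it's left out here):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// Given a width*height grid of unit surface normals (however they were
// sampled from the 3D model), pack them into RGB bytes with the usual
// n * 0.5 + 0.5 remap so [-1, 1] fits into [0, 255]. One pass over the
// pixels, so O(xy) as estimated above.
std::vector<uint8_t> encodeNormalMap(const std::vector<Vec3>& normals)
{
    std::vector<uint8_t> rgb;
    rgb.reserve(normals.size() * 3);
    for (const Vec3& n : normals) {
        rgb.push_back(static_cast<uint8_t>((n.x * 0.5f + 0.5f) * 255.0f));
        rgb.push_back(static_cast<uint8_t>((n.y * 0.5f + 0.5f) * 255.0f));
        rgb.push_back(static_cast<uint8_t>((n.z * 0.5f + 0.5f) * 255.0f));
    }
    return rgb;
}
```

At render time the lighting pass decodes each texel back to a normal and dots it against the light direction, which is what gives a flat sprite its 3D appearance.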

Now, on to Zanath. I figured out some exciting things regarding terrain creation for this. It came about when I finally got around to exporting the terrain model I've been working on to Ogre3D's .SCENE format. Basically, when you export, you get a bunch of easily-editable text files that you can do a lot of tweaking with. After checking out some stuff in various forums, I believe I can get texture splatting happening for Zanath!

This has got me excited, since it means the best performance, great detail and easy asset creation/editing. I still need to do a lot of experimenting to figure out exactly how to do it, but I feel confident that it is entirely within reach.

Until I do actually figure it out, I have to export things using megatexture-like methods. But the result is pleasingly accurate:

That's about it I guess.

Tuesday, 10 June 2008


Here we are. This is my blog. This is where I write particular thoughts associated with moments in time, thus logging things I think and do.

But that's obvious. Sorry; writing as I think. *Ahem*

Let's be specific. This is a place where I log things I am developing and working on. That is, projects of mine. What sort of projects do I work on? Well, there are many sorts, actually. I'm either a renaissance man or a jack of all trades; further data is necessary to decide just how pretentious I sound right about now.

Music, coding, illustration, 3d modelling, writing, designing. That's the basic summary. These are primarily geared towards the overall domain of `computer game development', but not necessarily.

Well, that's the Cliff's Notes version; let's cease the blathering and begin the blog in earnest. I will begin by outlining the primary collection of my current projects:
  • Synergy Engine

    This is a game engine I have been coding in C++ using the Allegro, AllegroGL (weaning off of this) and OpenGL libraries. It's very much in its infant stages currently. I called it the Synergy Engine primarily because that is how it began: simply a combination of many different fragments of code and classes that I had previously written, formed into a working system. Thus, the engine was devised under the concept of attaining a `power' that comes not merely from the sum of its pieces, but rather from the synergy of their cooperation.

    It was originally scoped to be primarily 2D in its functionality, but curiosity, and to a lesser extent boredom, bested me, and I began to focus principally on 3D rendering. So far it has been very fun, satisfying, educational and fruitful. What I have currently is 1024 lit cubes floating around in space at 40Hz, mouse-look and WASD controls for transforming the camera, console commands for controlling variables, error logging and some basic resource caching. It should be cross-platform compatible (though I definitely wouldn't stake my life on that). Not bad for someone who only vaguely knew the rendering pipeline conceptually before commencing the project.

  • Ultragraph

    Fantastic name, I know. How do I begin describing this project without sounding like the hugest nerd in the world? Wait, we crossed that threshold long before now, so I'll just go all-out:

    Ultragraph is a program that allows a user to model mathematical graphs in 3D, assigning custom data to vertices and edges, and performing a number of queries and operations on graphs such as topological sorting, traversal, spanning trees, circuit finding, path finding, travelling salesperson, etc. I want to add a number of other tools, such as support for layers, hiding/showing of groups of vertices and operation/state history. I'm sure there are things I've left out.

    The main purpose of the program is to allow for something I call Knowledge Management. Basically, we can use graphs with state histories and layers to model the `knowledge' of some entity, such as a character. Vertices contain data which may be as simple as a text string pertaining to some proposition that is held to be true by the entity's knowledge, such as `All men are mortal'.

    Edges can represent a lot of different things, but principally I think they are best used for modelling causality between propositions. Perhaps logical dependency (that is, if there is an edge (A, B), then B depends upon A being true; so if A is false, B must also be false). We could also use weighted edges to model probabilistic inferences.

    `Okay,' you might be saying, `WHY would you want that?' Well, turns out, I think it's an extremely useful tool for role playing. Yes, I mean role playing as in sitting at a table with other people, and dice, and character sheets and `The pungent stench of mildew emanates from the wet dungeon walls,' style conversation. Well not exactly. More so the internet forum mutation of role playing; the non-real-time stu

    If you are playing a highly intelligent, analytical character, having a mathematical model of all the knowledge they've accumulated over the entire course of the role play, which you can query and use for inference, will really cut down how long it takes you to make a post. It will also make your character's behaviour a lot cooler, because you'll truly be playing a character that never lets a fact slip (unless you, the player, play that they slipped it).

    So it gives you much more control. There are many, many other applications for such a program besides role playing; graphs are hugely applicable to all sorts of things. Anyway, that's Ultragraph.

  • Zanath

    This is a 3D Action-RPG in the vein of Zelda: OOT. I know what you're thinking: `RPG = dead project walking'. Well, it's not really my project, I'm just on board for it, so that doesn't really bother me; if it survives, that's great; if it doesn't, I'll live.

    I'm currently doing concept art for environments and modelling some basic terrain for engine testing purposes, as well as dropping my thoughts on gameplay and story considerations pretty much every chance I get, which is quite often.

  • Music Project (Cable Sack, Obscura Child)

    Title undecided at this point. Collaboration with my best friend and other associates. We have a solid concept that we are constructing and extrapolating from, and some interesting ideas on possible working process paradigms. Not much more can be said at this point, it's very young.

Today was reasonably progressive; I did an edit for someone I know from Pixel Joint who's doing a RuneScape sig. Here's how that shaped up:



Turned him into a monster! Satisfying, since I haven't really touched any digital art since the start of the semester (14 weeks ago, and I wasn't exactly in form even back then). Felt good to get in on some illustration and actually do something that didn't frustrate me.

Zanath work consisted mainly of trying to figure out how the HELL to make a low-poly tree model that actually looks half decent. It's incredibly frustrating that very little information on this seems to exist on the internet (seriously, is this innate knowledge to our species that I'm just missing?). Here's some renders of where I'm at with it:

Trees of Torment

Let's ignore that the trees all have the same z-rotation. But you can see how glaring the foliage planes are. The trunks are simple, since I've been pouring my attention into the damned foliage. I feel like I am getting closer, though. If I can't break this block soon, I'll set it aside and turn my attention to things I know I can do, like rocks, walls, houses, turrets, etc. Basically anything that isn't a plant.

Also, you can see the glaring need for the terrain to have either a giant texture (`megatexture' I call it, though it has nothing to do with the id Tech version; that's just what I call the method), or texture splatting of some kind (multi-channel blending). Shadow decals or light mapping aren't possible, since part of the design spec is that the game must have day/night cycles, so that ball is really in the coder's court.

Synergy saw some love! I had a second attempt at moving the geometry definition to Display Lists; no performance change (it actually went down almost 1Hz!). However, after I played around with the code execution sequence, moved some things around and simplified, I managed to turn that -1Hz into +10Hz! So I'm sitting on 40Hz now, which is satisfying. Curious what it looks like? Behold the cubes!

Synergy - Cubetopia

Read over a lot of material about Shadow Volume methods for calculating real-time shadows. Looks fun! I'll definitely give it a crack after I've ironed out texture mapping and anti-aliasing.

That's about it... and this took a lot longer to write than I thought it would; it's now nearly 5am. Delightful.