Exams are done and dusted! It feels like defeating the final boss in a game. So far I have gotten back my mark for Software Engineering: ~80%, so a D. Quite chuffed. So, besides basking in many games of Team Fortress 2, I've been noodling around ideas for what to focus on next with Synergy.

I did some light research into the physics engine libraries that are out there, and the selection is extensive. I was impressed. At this point the prime candidate looks to be Bullet; failing that, perhaps ODE. I made an attempt to compile some of the many MSVC++6 projects that come with Bullet's SDK; some worked, some didn't. No prebuilt binary exists for the library, so I don't know exactly how to proceed. Investigate the forums, I suppose...

I decided to leave that kind of expedition to a later date, primarily because I had a headache and just didn't feel like I had the energy for it. Instead I decided to perform some optimisation on the 3DS loader function, which still leans on the crutch of a brute-force algorithm to calculate smoothed normals, resulting in a stall of close to 5 minutes to load a 50,000-polygon, single-smoothing-group (worst case) model.

After chewing over the problem for a little while, I sketched out a rough algorithm that would run in O(fv) time, where f is the number of faces and v the number of vertices. After implementing it, I found the algorithm 'almost' works. Satisfyingly, though, it loaded that same 50,000-poly model in about 2 seconds! A very significant speed increase. Conceptually, what the algorithm 'should' be doing is exactly the same as the brute-force algorithm, but somewhere in the implementation there is a deviation which I have yet to isolate.
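For reference, the standard fast approach to smoothed normals is two passes: accumulate each face's normal into its vertices, then normalise the sums. Below is a minimal sketch of that idea, assuming indexed triangle data; `Vec3`, `Face` and `smoothNormals` are hypothetical stand-ins for the loader's real structures, and this ignores smoothing groups entirely.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical minimal types; the real loader's structures will differ.
struct Vec3 { float x, y, z; };
struct Face { std::size_t a, b, c; };

static Vec3 add(const Vec3& p, const Vec3& q) { return {p.x + q.x, p.y + q.y, p.z + q.z}; }
static Vec3 sub(const Vec3& p, const Vec3& q) { return {p.x - q.x, p.y - q.y, p.z - q.z}; }
static Vec3 cross(const Vec3& p, const Vec3& q) {
    return {p.y * q.z - p.z * q.y, p.z * q.x - p.x * q.z, p.x * q.y - p.y * q.x};
}
static Vec3 normalise(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    if (len == 0.0f) return v;
    return {v.x / len, v.y / len, v.z / len};
}

// One pass over the faces to accumulate, one pass over the vertices to
// normalise -- no per-vertex scan of the whole face list as in brute force.
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& verts,
                                const std::vector<Face>& faces) {
    std::vector<Vec3> normals(verts.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (const Face& f : faces) {
        // Unnormalised cross product; its length weights larger faces more.
        Vec3 n = cross(sub(verts[f.b], verts[f.a]), sub(verts[f.c], verts[f.a]));
        normals[f.a] = add(normals[f.a], n);
        normals[f.b] = add(normals[f.b], n);
        normals[f.c] = add(normals[f.c], n);
    }
    for (Vec3& n : normals) n = normalise(n);
    return normals;
}
```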

I thought the incorrectness might stem from the fact that 3DS files export duplicate vertices for the same coordinate in space. To remove this potential threat to the algorithm, I set about implementing a welding algorithm that merges duplicate vertices into a single vertex list entry, thus compressing the total number of vertices in the mesh.

This algorithm posed some very interesting challenges. The root constraint is that it could not be less efficient than the proposed smoothing algorithm (O(fv)), and from that constraint several further challenges follow. The algorithm I sketched out is quite straightforward, but the implementation proved a little trickier. The main issue was: how do you compare vertices?

I divined the answer to this question whilst going to the toilet (details, details...). Essentially, the order in which you compare a vertex's components has to match the order you sort them by, which seems obvious in hindsight. With a newly crafted comparison function, I had myself a linear-time welding function.
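One common way to weld is sort-based: order the vertices lexicographically (comparing components in the same order the sort uses), then merge adjacent duplicates and record an index remap for the faces. This is just a sketch of that idea, not the post's actual code -- the names are invented, it only merges exact duplicates (no epsilon tolerance), and the sort itself costs O(v log v) rather than strictly linear time.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Lexicographic comparison: the components are compared in exactly the
// order the sort key uses (x, then y, then z).
static bool lessXYZ(const Vec3& p, const Vec3& q) {
    if (p.x != q.x) return p.x < q.x;
    if (p.y != q.y) return p.y < q.y;
    return p.z < q.z;
}
static bool sameXYZ(const Vec3& p, const Vec3& q) {
    return p.x == q.x && p.y == q.y && p.z == q.z;
}

// Welds exact-duplicate vertices. remap[i] gives the new index of old
// vertex i, so the face index lists can be rewritten in one further pass.
std::vector<Vec3> weldVertices(const std::vector<Vec3>& verts,
                               std::vector<std::size_t>& remap) {
    std::vector<std::size_t> order(verts.size());
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return lessXYZ(verts[a], verts[b]); });

    std::vector<Vec3> welded;
    remap.assign(verts.size(), 0);
    for (std::size_t k = 0; k < order.size(); ++k) {
        // Duplicates are adjacent after sorting, so only the previous
        // kept vertex needs checking.
        if (k == 0 || !sameXYZ(verts[order[k]], welded.back()))
            welded.push_back(verts[order[k]]);
        remap[order[k]] = welded.size() - 1;
    }
    return welded;
}
```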

However, this welding function did not fix the smoothing algorithm. I've decided to leave further debugging until I've had a night's sleep, but progress is promising nonetheless!

EDIT (14:14, 27/06/2008): The algorithm is working! I was missing a test in the final pass over the indices to assign new normal values... I wasn't checking whether the face was part of the smoothing group! How obvious. Amazing what some sleep can do. The code for the linear-time normal smoothing is pretty decent.
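For anyone unfamiliar with the format: 3DS stores each face's smoothing groups as a 32-bit bitmask, one bit per group, and two faces smooth across a shared edge only when their masks overlap. The missing test was presumably something along these lines (the function name is mine):

```cpp
#include <cstdint>

// 3DS per-face smoothing groups are a 32-bit mask, one bit per group.
// Faces contribute to the same smoothed normal only if their masks
// share at least one set bit.
inline bool sharesSmoothingGroup(std::uint32_t a, std::uint32_t b) {
    return (a & b) != 0u;
}
```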

I've also had a look back over the mesh rendering function and split it depending on which shade model is enabled, meaning that rendering in Flat mode will use the face normal, rather than the normal of the first index in the face, which gave incorrect results.
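The split boils down to which normal gets submitted per corner: in Flat mode every corner of a face takes the face's own geometric normal, while in Smooth mode each corner takes its vertex's smoothed normal. A minimal sketch of that choice, with hypothetical types and names (the real renderer would feed the result to something like glNormal3f):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Face { std::size_t a, b, c; };

enum class ShadeModel { Flat, Smooth };

// True face normal from the cross product of two edges.
static Vec3 faceNormal(const std::vector<Vec3>& verts, const Face& f) {
    Vec3 u = {verts[f.b].x - verts[f.a].x, verts[f.b].y - verts[f.a].y,
              verts[f.b].z - verts[f.a].z};
    Vec3 v = {verts[f.c].x - verts[f.a].x, verts[f.c].y - verts[f.a].y,
              verts[f.c].z - verts[f.a].z};
    Vec3 n = {u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x};
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}

// Flat mode: the face normal for every corner (never just the first
// index's vertex normal). Smooth mode: that corner's smoothed normal.
Vec3 cornerNormal(ShadeModel mode, const std::vector<Vec3>& verts,
                  const std::vector<Vec3>& smoothed, const Face& f, int corner) {
    if (mode == ShadeModel::Flat) return faceNormal(verts, f);
    std::size_t idx = (corner == 0) ? f.a : (corner == 1) ? f.b : f.c;
    return smoothed[idx];
}
```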

I'm very happy that I managed this speed increase! Now I can consider physics integration with a clear state of mind.

Chimps
