Well, I've been largely preoccupied with the struggle between procrastination and obligation that is typical of my student career. For the most part, the only productive thing I have been working on has been my Systems Programming assignment, which essentially involves developing a shell for UNIX.
Outside of that (and largely useless activities such as online gaming), I have tried to find time and motivation to work on some kind of creative activities. I'm trying to avoid diving into my large projects (such as Synergy) for a few reasons:
- working on them is mentally draining and involving, and I need to maintain my mental stamina for academic pursuits.
- I'm learning a lot of very pertinent programming methods in my study this semester, methods which apply directly to improving the operation of Synergy. As such, I want to wait until I have grasped this knowledge more fully before I apply it to my personal projects.
So my focus has been on finding creative activities that still contribute to the development of my overall skill set (and ultimately my portfolio of work), but are lightweight enough that I can pick them up, work on them for a set amount of time, and then put them down and return to my study.
Ultimately, this has led me to do a lot more artistic activities, such as drawing. Nick introduced me to a very useful site called 'Posemaniacs'. Head over there if you want to see what it's about. You don't want to? Fine, I'll explain: it is a collection of browser-embedded tools for human figure sketching and study. My link leads to the 30-second drawing section, which, as you might suspect, presents figures in poses for only 30 seconds at a time before changing, promoting rapid sketching and utmost line economy.
I've only attempted a couple of sessions, undertaking a more reasonable [for me] configuration of 60 seconds. Here's how they've turned out:
And a slightly less deplorable collection:
I also took part in a competition over at GeekFanGirl, where entrants had to either draw a scene or character from an active role play, or write a short story about it. As it turns out, I was the only one to submit an art entry. I drew one of the main NPCs from the RP I GM, the wild druid, Taech:
There was a deadline for entries, and mine made it with 40 seconds to spare. To put it another way, this entry was largely rushed. That was a positive thing, in my mind, because I learned a lot about being economical with my workflow. I'm irritated with a lot of things in the piece, though, such as the messed-up lines and artifacts around the silhouette, the skull structure deviating completely from what I wanted, and the wildly inconsistent rendering styles across the image.
Still, I'm pretty proud of the results, either way. I refuse to alter it, leaving it as a record of what I could accomplish in the time I had for working on it.
So that's about it. I'm hoping to make the 60 second sketches a regular, if not daily activity. If I can manage that, then I believe I am well on my way to becoming a competent artist.
Welcome
Hello.
Let's have fun together. I'll create things and you observe me.
This is a blog detailing all the projects I work on. It's a record of where things are at and a pin board of small random bits and pieces of creation.
I share anything useful I've come across during development, so if you're trying to solve a similar problem checking the labels on the right may be of assistance.
Feel free to leave a comment. Also, please take note that 90% of these blogs are compiled at 3 in the morning. The true hour of the day.
Enjoy your stay.
-Ryan
Thursday, 4 September 2008
Labels: sketches
Friday, 25 July 2008
Lightmaps cont.
Well, it's been a little while since there has been any progress of note on any of my primary projects, but today I finally felt motivated enough to jump back in to work on Synergy.
What have I been up to in the meantime? After a game of a WarCraft III map called 'Warchasers' at Ben's place, I felt compelled to make a much-improved clone of its gameplay. I worked obsessively on this clone, titled Netherworld Odysseys. It has become quite sophisticated, with a generalised random loot system, random monster spawning, character saving, charged item stacking, multiple difficulty levels (including difficulty scaling with the number of players), a door/key system and Diablo 2-style mercenaries. Here are a couple of screenshots just for fun:
Unfortunately, the map was foremost designed as a multiplayer experience, but due to a number of technical shortcomings, saving and loading of characters became necessary to keep the map running, and saving/loading of data only works in single player. This in effect rendered the map a single-player-only experience, killing my enthusiasm for it.
However, it's not a complete loss. The triggers I developed for it are highly modular, and many of the map's cool features can be reused in other WarCraft maps of mine. If I find the time and enthusiasm, I might return to it, see if I can fix some of the technical issues currently crippling it, bring it back to a playable state for multiple players, get it to a satisfying play length and release it. Until then, it festers on my hard drive...
So! What did today bring? I admit that I had become horrifically bogged down with the lightmap functionality of Synergy and was thus suffering a fetid case of Coder's Block. The lightmaps needed to be divided into correct patches over the level geometry to give an unstretched, unwarped surface for the lighting to exist over.
Such functionality is non-existent in Synergy in its current state, and my coder's block prevented me from proceeding further with the matter. Thus, today was spent researching methods for patching, and other methods of lighting a scene.
I ended up reading up on radiosity methods, and the literature revealed a great deal of insight. Such was the extent of my new-found understanding that I went back to the problem of patching with much more clarity and began to sketch out a patching algorithm for Synergy (based largely upon this example).
I'm still going through it, and will continue work on it tomorrow, but as it stands currently it seems most satisfactory. Upon implementation I expect Synergy's light mapping to be devoid of artifacts, uniform and correctly aligned. From there it's a very simple extension to add in some brute radiosity.
Here's hoping there's no hitches.
Labels: Coding [Lightmaps], Netherworld Odysseys, Synergy
Tuesday, 8 July 2008
Let There Be Lightmaps
Lightmaps are officially a nightmare to implement. They seem easy, but I have found otherwise. The last four days have been a battle, but I have made significant progress in implementing this feature in Synergy.
I began with the code outlined in this tutorial. A number of modifications needed to be made, including accounting for multiple light sources and using triangular surfaces instead of quads. I also factored a Lambertian calculation into the formula. Following the tutorial, plus the minor modifications I made to the formula, I ended up with this strange result:
The streaks were caused by a value overflow with my premature byte casting. Working in dwords and then casting to byte after the clamp got me to this (after adding a couple more lights):
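The fix, in a minimal sketch (illustrative names, not Synergy's actual code):

```cpp
#include <cstdint>

// Sum per-light contributions for one colour channel in a wide integer,
// clamp, and only then cast down to a byte. Casting each contribution to a
// byte first lets the running total wrap around, producing the streaks.
uint8_t accumulateChannel(const int32_t* contributions, int count) {
    int32_t total = 0;                  // dword accumulator, no wrap-around
    for (int i = 0; i < count; ++i)
        total += contributions[i];
    if (total > 255) total = 255;       // clamp to the byte range first...
    if (total < 0)   total = 0;
    return static_cast<uint8_t>(total); // ...then cast, exactly once
}
```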
So it's obvious to see here the mapping coordinates are wrong. After systematically trying every single combination of texture coordinates for each vertex I discovered the closest result was this:
This implied the texture needed to be flipped on one axis, so I changed the order the data was input into the bitmap, and lo and behold I had this:
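Reordering the writes amounts to flipping the rows of the lumel buffer. A sketch, assuming a simple one-byte-per-lumel layout:

```cpp
#include <cstdint>
#include <vector>

// Write the lumel rows in reverse vertical order, flipping the lightmap on
// one axis without touching any texture coordinates.
std::vector<uint8_t> flipRows(const std::vector<uint8_t>& src,
                              int width, int height) {
    std::vector<uint8_t> dst(src.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            dst[(height - 1 - y) * width + x] = src[y * width + x];
    return dst;
}
```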
Although it's close to a correct solution, it's easy to see there are a number of artefacts present in the lightmap, such as the lit borders and the holes in the corners. All of the problems become much more visible when viewing the lightmap in isolation:
Higher resolution lightmaps such as 64x64 mitigate the problem, but cut deeply into the framerate. I'm still considering how I can overcome this problem, but I discovered a number of documents and papers on the subject of lightmaps, so I'll be taking a read of those and hopefully gain a bearing.
I needed to implement a basic light management system to support all of this, and it has led to some rather nice results for the lighting overall in the demo. The main improvement is the addition of attenuation constants for each light source which coincide with those used to calculate the lightmap, leading to nice, consistent lighting:
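The attenuation itself is the standard constant/linear/quadratic falloff that OpenGL's fixed-function lights use, so the same three constants can feed both the GL light state and the lightmap generator. A sketch (the clamp is my own convention):

```cpp
// Distance attenuation: 1 / (kc + kl*d + kq*d^2), clamped so a light can
// never brighten a lumel beyond its base intensity.
float attenuation(float dist, float kc, float kl, float kq) {
    float denom = kc + kl * dist + kq * dist * dist;
    return (denom > 1.0f) ? 1.0f / denom : 1.0f;
}
```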
And now for something that isn't to do with the Synergy engine! I was talking to Nick online and he suggested we have a go at doing some sketching together using openCanvas. I've had my head so deeply in coding territory the last few days that it was a little difficult jumping back into drawing mode, but after roughly 15-20 minutes I had this horrible-looking sketch of a person that resembles a damn night elf:
It's OK in some respects, very mediocre in others. Working with the construction was extremely cumbersome, and it has highlighted the fact that I need to engage in more studies and revision of facial construction and anatomy theory. Still, it was a good process to work through, and having Nick on hand to point out issues as they manifested was a new and interesting experience. We've agreed to spot each other's work (presumably studies) from now on, which I'm looking forward to.
Labels: Coding [Lightmaps], sketches, Synergy
Saturday, 5 July 2008
Bits and Pieces 2
More relatively unremarkable tasks undertaken for Synergy today. Most of it was restructuring of code, generalising demo code into specific classes. This resulted in a fleshing out of the mesh/model class family, including two new classes, SynStaticModel and SynDynamicModel.
I've also added some code in SynMesh in preparation for implementing dynamic lightmap functionality in mesh rendering, plus an animation skeleton data member for loading animation information from 3DS files.
I also created the almighty SynEntity class, from which many other classes are derived. It's just a position and orientation in space with some wrapper functions for doing rotations easily, but that's something that gets used over and over, and it has tidied up the code considerably.
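For illustration only, a base like that might look something like the following; the names and the degree-based orientation are my guesses, not SynEntity's actual interface:

```cpp
// A position and orientation in space with convenience rotation wrappers,
// for other classes to derive from. Angles are stored in degrees and kept
// wrapped to the range [0, 360).
struct EntityBase {
    float x = 0.0f, y = 0.0f, z = 0.0f;    // position
    float rx = 0.0f, ry = 0.0f, rz = 0.0f; // orientation

    static float wrapDeg(float deg) {
        while (deg >= 360.0f) deg -= 360.0f;
        while (deg < 0.0f)    deg += 360.0f;
        return deg;
    }
    void rotateX(float deg) { rx = wrapDeg(rx + deg); }
    void rotateY(float deg) { ry = wrapDeg(ry + deg); }
    void rotateZ(float deg) { rz = wrapDeg(rz + deg); }
};
```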
I also generalised the texture loading functionality into two functions in the GfxCore class. This has made setting up textures a breeze.
That's about it, I suppose. I'm eager to begin implementing light maps as soon as possible. Hopefully tomorrow night.
Labels: Synergy
Friday, 4 July 2008
Bits and Pieces
I've been sucked into Diablo 2 and other distractions lately (damn you, Blizzard), so progress has been meagre. However, I was determined today to continue progress on Synergy. I have been deliberating over integrating a physics engine into the code, and the task is daunting.
I've decided to make the task as easy as possible by setting up everything the physics tests will eventually need and fleshing out the 3DS loading further. This entailed implementing some functionality that supports environments. There are a lot of different features that fall under this umbrella, including lightmaps, materials, entity spawning and many others.
I decided that decent groundwork for a number of these things was to begin extracting material information from the 3DS files. This task proved to be another exercise in cryptography, as only relatively vague information is at my disposal regarding the file format.
However, after a number of hours stumbling around in the dark, I was successful: the engine loaded a model of a simple room that had separate materials for the walls and floor. I manipulated the numbers in the demo code somewhat to make sure all the objects sit within this new room, in preparation for implementing the physics/collision detection. Here's how it all shaped up:
Almost starting to look like a game?!
There's a lot more material information in the 3DS file I can potentially tap into, but it is not relevant at this stage in the engine. Next I will be focusing on loading in lights from 3DS files, and maybe, just maybe take a look at calculating light maps...
I've also devised an algorithm for welding points within a certain distance threshold of each other. Big deal, right? Well, as far as my searching on Google went, the standard approach appears to be brute-force O(n^2) algorithms (for example, an excerpt from a game programming book). My approach is very simple: apply a 3-dimensional closest-pair algorithm to the vertices, with one modification. When the algorithm reaches an input of only 2 vertices, test if their distance is below the given threshold; if it is, weld them and return infinity for the distance.
This should, in effect, weld all vertices within the threshold in O(n log n) time, but I have yet to implement it and test its correctness.
Labels: Algorithm [Welding], Coding [3DS loading], Synergy
Friday, 27 June 2008
Final Battle!
Exams are done and dusted! It feels similarly epic to defeating the final boss in a game. So far I have gotten back my mark for Software Engineering: ~80%, so a D (a Distinction). Quite chuffed. So, besides basking in many games of Team Fortress 2, I've been noodling over ideas for what I should focus on next with Synergy.
I did some light research into the physics engine libraries that are out there, and the selection is extensive. I was impressed. At this point the prime candidate looks to be Bullet; failing that, perhaps ODE. I made an attempt to compile some of the many MSVC++6 projects that come with Bullet's SDK; some worked, some didn't. There's currently no prebuilt binary for the library, so I don't know exactly how to proceed. Investigate the forums, I suppose...
I decided to leave that kind of expedition for a later date, primarily because I had a headache and just didn't feel like I had the energy for it. Instead, I decided to perform some optimisation on the 3DS loader function, which was still employing the crutch of a brute-force algorithm to calculate smoothed normals, resulting in a stall of close to 5 minutes to load a 50,000-polygon, single-smoothing-group (worst case) model.
After chewing over the problem for a little while, I sketched out a rough algorithm that would run in O(fv) time, where f = number of faces and v = number of vertices. After implementing it, I found the algorithm 'almost' works. Satisfyingly, however, it took about 2 seconds to load that same 50,000-poly model! A very significant speed increase. Conceptually, what the algorithm 'should' be doing is exactly the same as the brute-force algorithm, but somewhere in the implementation there is a deviation which I have yet to isolate.
I thought the incorrectness may have stemmed from the fact that 3DS files export duplicate vertices for the same coordinate in space. To remove this potential threat to the algorithm, I set about implementing a welding algorithm that would merge duplicate vertices into the one vertex list entry, thus compressing the total number of vertices in the mesh.
This algorithm posed some very interesting challenges. The root challenge is that it could not be less efficient than the proposed smoothing algorithm (O(fv)), and from this root challenge many others spawned. The algorithm I sketched out is quite straightforward, but the implementation proved to be a little trickier. The main issue was: how do you compare vertices?
I divined the answer to this question whilst going to the toilet (details, details...). Essentially, the order in which you compare the components of a vertex depends upon the order you sort them in, which seems obvious upon reflection. With a newly crafted comparison function, I had myself a linear-time welding function.
However, this welding function did not fix the smoothing algorithm. I've decided to leave further debugging until I've had a night's sleep, nevertheless progress is promising!
EDIT (14:14, 27/06/2008): The algorithm is working! I was missing a test in the final pass over the indices to assign new normal values... I wasn't testing whether the face was part of the smoothing group! How obvious. Amazing what some sleep can do. The code for the linear-time normal smoothing is pretty decent.
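The shape of the final algorithm, sketched here for the simplest case of a single smoothing group covering the whole mesh (the real version additionally tests each face's group bitmask before accumulating; names are illustrative):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3 normalise(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    if (len > 0.0f) { v.x /= len; v.y /= len; v.z /= len; }
    return v;
}

struct Face { int v[3]; };

// One scatter pass over the faces, one gather pass over the vertices,
// instead of scanning every vertex against every face.
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& verts,
                                const std::vector<Face>& faces) {
    std::vector<Vec3> normals(verts.size(), { 0.0f, 0.0f, 0.0f });
    for (const Face& f : faces) {
        // Unnormalised cross product: larger faces weigh in more heavily.
        Vec3 n = cross(sub(verts[f.v[1]], verts[f.v[0]]),
                       sub(verts[f.v[2]], verts[f.v[0]]));
        for (int i = 0; i < 3; ++i) {
            normals[f.v[i]].x += n.x;
            normals[f.v[i]].y += n.y;
            normals[f.v[i]].z += n.z;
        }
    }
    for (Vec3& n : normals) n = normalise(n);
    return normals;
}
```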
I've also had a look back over the mesh rendering function and split it depending on which shade model is enabled, meaning that rendering in Flat mode will use the face normal, rather than the normal of the first index in the face, which gives incorrect results.
I'm very happy that I managed this speed increase! Now I can consider physics integration with a clear state of mind.
Wednesday, 18 June 2008
Give Me Some Skin!
After suffering an 08:30 exam and the apocalyptic effects it has on my sleeping patterns, I felt I might loosen the screws in my mind by taking another look at implementing texture mapping in Synergy.
My suspicions lay with the library I was using to load texture data into OpenGL. To test this, I downloaded and integrated SOIL. With SOIL in place it seemed I could load textures, but they were not mapped to the meshes at all.
That made me realise I had left the texture mapping aspect of the SynMesh class, well, hanging. Going back over that was a simple affair, though I had to make an educated guess as to how the .3DS file's texture mapping data is actually organised with respect to polygons. It is, of course, the obvious answer: for each vertex in the vertex list, the texture vertex at the same relative position in the texture vertex list is that vertex's texture coordinate.
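In code terms, the mapping is just a shared index. A tiny sketch, with made-up mesh field names:

```cpp
#include <vector>

struct TexCoord { float u, v; };
struct Face { int v[3]; };

// The texture vertex list stores one (u, v) pair per entry in the vertex
// list, in the same order, so a face corner's texture coordinate is looked
// up with the same index as its position.
TexCoord cornerTexCoord(const std::vector<TexCoord>& texVerts,
                        const Face& face, int corner) {
    return texVerts[face.v[corner]];
}
```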
A little tweaking of SynMesh.render(), and various other spots in the demo code, and I was in business:
Note that the skin for the tree isn't actually finished, so that's a data issue, not a processing error. Very satisfying, after the endless battle with AllegroGL just to do basic texture mapping. Tsk tsk. Here's a lit screenshot, just for fun:
Sunday, 15 June 2008
Criminally Smooth
:). I am pleased. It was a long, hard battle, with much contemplation, testing, probing and trial and error, but Synergy now sports Smoothing Group functionality!
Loading in the smoothing group info was a small mission in itself, one which began with trying to find info on how to get it out of a .3DS file. Implementing this actually spurred a revolution in my coding: being very specific about my longs ('32 or 64 bit?'), which seems to have somehow increased the frame rate of Synergy's various demos by up to 6Hz?!
Building upon the code from that tutorial I linked to in the previous post, the code for reading smoothing groups is about as simple as it gets.
I pondered for most of the day on how the process would actually work, sewing threads of ideas through the problem in my head, considering different approaches. What I finally ended up with was a brute-force algorithm with slight optimisations.
The road to reach this code was treacherous, though, with some interesting results along the way:
A World With 1 Normal
This problem was isolated and corrected relatively quickly: basically, I hadn't initialised the indices' normals to blank normals before accumulating into them. So fixing that brought me to this:
Revenge Of The Cubes
Great. So some things were looking lovely, others completely screwed. This proved to be part of a bigger problem; it also created seams:
Almost Anatomically Correct!
And it generally just led to strange lighting behaviour on flat surfaces:
Colt 1911s Never Looked So... Jaded
Although for the most part that Colt 1911 model (made by Dylan) looks very very sexy, one can quickly pick out areas which are rendered incorrectly.
I found the problem to be that the first index in a smoothing group wasn't being included in the normal calculation! So, with that fixed, Synergy could now do the Colts justice:
Mission: Accomplished
So, what's next? Study! It's time to stop getting sucked into my projects and turn my attention towards trying to get HDs for my end of semester exams. So, until next week I guess Synergy will have to wait.
Saturday, 14 June 2008
Meshin' Accomplished
Yeah I know, I'm the lamest person on Earth for using that title. But I don't care, I've succeeded too much today to feel anything but empirically chuffed. Despite cleaning up baby vomit at my Day Job, my time was devoted wholly to Synergy.
After working on the SynMesh and SynModel classes last night, I was keen to get a function running that could load in mesh data from some 3D model file format. After some research I eliminated the majority of options, including all formats without bone/keyframe animation, and all formats which aren't widely accessible. That is, not supported by a wide range of modelling programs. That left me with 3D Studio Max's .3DS format. Big surprise there.
Like all great things, my quest for this functionality began with a Google search, which led me to a very useful tutorial. After some copying, pasting, and butchering of this code into a form suitable for my own, I began to, with breath guarded, integrate it into the cubetopia demo, replacing my hand-coded geometry information with SynMeshes loaded from a .3DS geosphere.
Some small tweaks here and there and BAM! Meshes loaded:
Trippin' Balls
I suppose it can't really be called `cubetopia' anymore. Perhaps gala would be more fitting. A number of issues were outstanding at this point. The foremost was that there were no normals! So I set about calculating normals upon loading each face. That led to some interesting mistakes:
Feeling very satisfied with myself, I was eager to push the code and see just how far I'd actually gotten it to work. Next was a model of a tree that I was working on once upon a time, and to my delight it worked straight away, giving a pretty funky result:
Styx
I began to notice a significant delay in loading the 3DS file, so I decided to see how much this delay scaled with input size. I needed a high poly model... and then I remembered a skull I was sculpting in ZBrush once upon a time. This led to a huge delay of almost 2 minutes to load the file. It was obvious where the hangup was: the obscene amount of debug info I was printing to the console and the log file. I commented those out and the delay was vanquished, which led me to the final product for the evening:
Synergy's High Poly Flat Shading v0.00001
Each skull is ~50k polygons, so there's 400k polys being rendered there. It is both a milestone and a map of the steps yet to take. The most obvious next step is to implement Smoothing Groups. For instance, here is the exact same model rendered in Max with smoothing groups on:
Big difference. So that's about it; excellent progress for 5 hours work, in my opinion. I hope to make the same leaps and bounds tomorrow, but we'll see...
Labels:
Coding [3DS loading],
Synergy
Chilled
Working spasmodically over the last couple of days. Felt like I needed to back off a little bit, since my mind was beginning to race constantly, and it was affecting my sleeping patterns, which in turn affected my productivity. So work performed must be measured to ensure optimal output.
Chilled out now though, thanks to good old Elliott. The relaxed mind state has assisted in persistence and patience in moments of slow progress, such as being unable to crack the nut of OpenGL texture mapping for Synergy. I've gone over every line of code a number of times, and can find no deviation from the example code I have been observing, yet my code does not work.
My only conclusion is that it is some combination of operations I am doing which, on their own, are valid, but in sequence lead to the failure of texture mapping. I believe it has something to do with the way I have set up materials in combination with the parameters I have provided AllegroGL to load the bitmaps with.
After spending a couple of hours here and there each day trying to crack it, I set it aside and turned my time towards more fruitful endeavours: models and animation. Progress has been going well with these, and I have the SynMesh and SynModel classes laid out, with a little bit of body to them. I'll be sinking my teeth into the gut of them sometime soon. I have a very clear idea in my head of how the entire Skeletal Animation Method works; a depth-first traversed tree of matrices which are pushed onto a matrix stack similar to OpenGL's method of rendering.
I've also been talking with Dylan about a test bed for Synergy. That is, a game built with it to test and showcase its features. Those are key goals for the game, but we will certainly not ignore the most important goal: to make an insanely fun, insanely cool game we (and others) can play.
The concept we decided on was made unanimously, silent and in parallel with each other. We basically said at the same time `It should be Galileo Complex!'
`What's that?' you ask. Well how do I not mutate this into a huge story... basically it's a game concept we came up with a couple of years ago, that we never really had the resources and faculties to do at the time, but was always going to be a cool idea. It's a top-down action game that is a blend between death match and RTS. You control a commander who leads a group of AI-controlled players, and you must complete objectives in order to progress through the current level you are on. Objectives change depending on what other commanders have done, and matches can be a back-and-forth tug of war over areas of the level and its various objectives and resources.
I won't really go into more detail, but that's the core of it. We are inspired by the elegance of Quake's Deathmatch (Quake 1, that is), and Total Annihilation's RTS combat and interface. So our feature list pools a lot of things we observe these games exhibiting; a key one being that the game should be extensible. That is, people should be able to mod it easily and in a fun way. There are lots of exciting ideas we have, and I'm itching to get the Synergy Engine up to a point where we can start actually sinking our teeth into this thing.
Originally I thought the game could be an orthographic, fixed camera angle viewport. That is, essentially a 2D game. We're thinking of going with perspective rendering now, but an interesting design idea I had for the 2D thing was character-sprites that had Normal Maps, thus allowing for real-time lighting to be calculated for them, giving them a very 3D appearance without having to rasterise polygons.
While waiting for my tea to brew, I paced the living room and devised a rough algorithm for calculating the normal-mapped sprite from a 3D model.
I haven't really checked it for correctness or efficiency (should be roughly O(xy)), but it feels pretty solid. Despite the fact we're probably going with perspective projection for Galileo, I'll still implement this feature in Synergy. We'll probably end up using it in a number of places in the game anyway; it's too cool not to fiddle around with.
Now, on to Zanath. I've figured out some exciting things regarding terrain creation for this. It came about when I finally got around to exporting the terrain model I've been working on to Ogre3D's .SCENE format. Basically, when you export, you get a bunch of easily-editable text files that you can do a lot of tweaking with. After checking out some stuff in various forums, I believe I can get texture splatting happening for Zanath!
This has got me excited, since it means best performance, great detail and easy asset creation/editing. I still need to do a lot of experimenting and such to figure out exactly how to do it, but I feel confident that it is entirely within reach.
Until I do actually figure it out, I have to export things using megatexture-like methods. But the result is pleasingly accurate:
That's about it I guess.
Tuesday, 10 June 2008
Kickoff
Here we are. This is my blog. This is where I write particular thoughts associated with moments in time, thus logging things I think and do.
But that's obvious. Sorry; writing as I think. *Ahem*
Let's be specific. This is a place where I log things I am developing/working on. That is, projects of mine. What sort of projects do I work on? Well, there's many sorts, actually. I'm either a renaissance man, or a jack of all trades; further data is necessary to decide just how pretentious I sound right about now.
Music, coding, illustration, 3d modelling, writing, designing. That's the basic summary. These are primarily geared towards the overall domain of `computer game development', but not necessarily.
Well, that's the Cliff's Notes, let's cease the blathering and begin the blog in earnest. I will begin by outlining the primary collection of my current projects:
- Synergy Engine
This is a game engine I have been coding in C++ using the Allegro, AllegroGL (weaning off of this) and OpenGL libraries. It's very much in its infant stages currently. I called it the Synergy Engine primarily because that is how it began; simply a combination of many different fragments of code and classes that I had previously written, formed into a working system. Thus, the engine was devised under the concept of attaining a `power' that was more than the sum of its pieces, but rather from the synergy of their cooperation.
It originally was scoped to be primarily 2D in its functionality, but curiosity and to a lesser extent boredom bested me, and I began to focus principally on 3D rendering. So far it has been very fun, satisfying, educational and fruitful. What I have currently is 1024 lit cubes floating around in space at 40Hz, mouse-look and WASD controls for transforming the camera, console commands for controlling variables, error logging and some basic resource caching. Should be cross-platform compatible (though I definitely wouldn't stake my life on that). Not bad for someone who only vaguely knew the rendering pipeline conceptually before commencing the project.
- Ultragraph
Fantastic name, I know. How do I begin describing this project without sounding like the hugest nerd in the world? Wait, we crossed that threshold long before now, so I'll just go all-out:
Ultragraph is a program that allows a user to model mathematical graphs in 3D, assigning custom data to vertices and edges, and performing a number of queries and operations on graphs such as topological sorting, traversing, spanning trees, circuit finding, path finding, travelling salesperson, etc. I want to add a number of other tools, such as support for layers, hiding/showing of groups of vertices and operation/state history. I'm sure there are things I've left out.
The main purpose of the program is to allow for something I call Knowledge Management. Basically, we can use graphs with state histories and layers to model the `knowledge' of some entity, such as a character. Vertices contain data which may be as simple as a text string pertaining to some proposition that is held to be true by the entity's knowledge, such as `All men are mortal'.
Edges can represent a lot of different things, but principally I think they are best used for modelling causality between propositions, perhaps logical dependency (that is, if there is an edge (A, B), then B depends upon A being true; so if A is false, B must also be false). We could also use weighted edges to model probabilistic inferences.
`Okay,' you might be saying, `WHY would you want that?' Well, turns out, I think it's an extremely useful tool for role playing. Yes, I mean role playing as in sitting at a table with other people, and dice, and character sheets and `The pungent stench of mildew emanates from the wet dungeon walls,' style conversation. Well not exactly. More so the internet forum mutation of role playing; the non-real-time stuff.
If you are playing a highly intelligent, analytical character, having a mathematical model of all of their knowledge accumulated over the entire course of the role play, one that you can query and use for inference, is something which will really cut down how long it takes you to make a post, and it will make your character's behaviour a lot cooler, because you'll truly be playing a character that never lets a fact slip (unless you, the player, play that they slipped it).
So it gives you much more control. Anyway, there are many, many other applications for such a program besides role playing; graphs are hugely applicable to all sorts of things. So, that's Ultragraph.
- Zanath
This is a 3D Action-RPG in the vein of Zelda: OOT. I know what you're thinking: `RPG = dead project walking'. Well, it's not really my project, I'm just on board for it, so that doesn't really bother me; if it survives, that's great; if it doesn't, I'll live.
I'm currently doing concept art for environments and modelling some basic terrain for engine testing purposes, as well as dropping my thoughts on gameplay and story considerations pretty much every chance I get, which is quite often.
- Music Project (Cable Sack, Obscura Child)
Title undecided at this point. Collaboration with my best friend and other associates. We have a solid concept that we are constructing and extrapolating from, and some interesting ideas on possible working process paradigms. Not much more can be said at this point, it's very young.
Turned him into a monster! Satisfying, since I haven't really touched any digital art since the start of the semester (14 weeks ago, and I wasn't exactly in-form even back then, anyway). Felt good to get in on some illustration and actually do something that didn't frustrate me.
Zanath work consisted mainly of trying to figure out how the HELL to make a low-poly tree model that actually looks half decent. It's incredibly frustrating that very little information on this seems to exist on the internet (seriously, is this innate knowledge to our species that I'm just missing?). Here's some renders of where I'm at with it:
Let's ignore that the trees all have the same z-rotation. But you can see how glaring the foliage planes are. The trunks are simple, since I've been pouring my attention into the damned foliage. I feel like I am getting closer, though. If I can't break this block soon, I'll set it aside and turn my attention to things I know I can do, like rocks, walls, houses, turrets, etc. Basically anything that isn't a plant.
Also, you can see the glaring need for the terrain to have either a giant texture (megatexture I call it, though it has nothing to do with the id tech, that's just what I call that method), or texture splatting of some kind (multi-channel blending). Shadow decals or light mapping aren't possible, since part of the design spec is the game must have day/night cycles, so that ball is really in the coder's court.
Synergy saw some love! I had a second attempt at moving the geometry definition to Display Lists; no performance change (actually it almost went down 1Hz!). However, after I played around with the code execution sequence, moved some things around, simplified, I managed to turn that -1Hz into +10Hz! So I'm sitting on 40Hz now, which is satisfying. Curious what it looks like? Behold the cubes!
Synergy - Cubetopia
Read over a lot of material about Shadow Volume methods for calculating real-time shadows. Looks fun! I'll definitely give it a crack after I've ironed out texture mapping and anti-aliasing.
That's about it... and this took a lot longer to write than I thought it would; it's now nearly 5am. Delightful.
Labels:
edits,
music,
Synergy,
Ultragraph,
Zanath