Tuesday, May 11, 2010

More random thoughts

Random thought 1: GJK / backface culling

So I was reading another chapter in my "Game Engine Architecture" book, written by Jason Gregory (which, incidentally, is a really good book), about physics, or more specifically, about the GJK algorithm.

I've seen the algorithm mentioned before but only glanced over it, not realizing how useful it is. Now I do!
It's very useful for doing intersection tests for my CSG algorithm.

Part of the GJK algorithm is finding the point furthest along a particular direction within a convex polytope (which is, basically, the same thing as a CSG brush).
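
For context, that furthest-point query is what's usually called the support function, and it's just a dot-product scan over the vertices. A minimal sketch (my own code, not from the book):

```cpp
#include <cstddef>

struct Vec3 { float x, y, z; };

inline float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Support function: the vertex of a convex polytope (e.g. a CSG
// brush) that lies furthest along the given direction.
Vec3 Support(const Vec3* verts, size_t count, const Vec3& dir)
{
    Vec3  best     = verts[0];
    float bestDist = Dot(verts[0], dir);
    for (size_t i = 1; i < count; ++i)
    {
        const float d = Dot(verts[i], dir);
        if (d > bestDist) { bestDist = d; best = verts[i]; }
    }
    return best;
}
```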

Maybe they're already doing this in real life, but I couldn't help but wonder: why aren't they backface culling polygons (and all their vertices) using this direction vector?

Update: Giving this some more thought makes me think that this won't work.
When you have relatively few points, all the backface culling operations would take more cycles than just going through all the points. Yet if you have lots and lots of points you end up with more branching and CPU cache misses. So ignore this idea.

I wonder if storing vertices in 8 octants, where each octant corresponds to a particular range of directions, and then using the signs of the x, y & z components of the direction vector to pick an octant, would work.
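
Something like this, purely as a sketch (reusing the Vec3 from above; all names are mine). Note that for this to actually be correct, a vertex would have to be stored in every octant for which it can be the furthest point, so the buckets would overlap; this sketch ignores that detail:

```cpp
#include <vector>

// Hypothetical: bucket vertices by the octant of their position,
// then scan only the bucket matching the sign bits of the
// search direction.
inline int OctantIndex(const Vec3& v)
{
    return (v.x >= 0.0f ? 1 : 0)
         | (v.y >= 0.0f ? 2 : 0)
         | (v.z >= 0.0f ? 4 : 0);
}

struct OctantVerts
{
    std::vector<Vec3> octants[8];

    void Insert(const Vec3& v) { octants[OctantIndex(v)].push_back(v); }

    // Candidate set for a support query along 'dir'.
    const std::vector<Vec3>& Candidates(const Vec3& dir) const
    {
        return octants[OctantIndex(dir)];
    }
};
```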


Random thought 2: CPU Cache & Component entity systems

Couldn't help but think that component entity systems, data-oriented design (optimizing for the CPU cache), block-based memory allocation, and splitting all the work into tasks (for easy multi-threading) would work really well together, if they were all designed together.

After all, component entity systems have lots of lists of similar components on which systems work.
This is exactly what you'd want with data-oriented design, because it's cache friendly, and it also allows you to break the work up into small tasks within each system.
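
A toy sketch of what I mean (all names hypothetical): each system owns tightly packed arrays of the component data it touches and just walks them linearly, which is cache friendly and trivially splittable into tasks:

```cpp
#include <cstddef>
#include <vector>

// Component data stored contiguously so a system can stream
// through it linearly (cache friendly, easy to chop into tasks).
struct Position { float x, y, z; };
struct Velocity { float x, y, z; };

struct MovementSystem
{
    std::vector<Position> positions;   // index i = entity i's data
    std::vector<Velocity> velocities;

    // Update a sub-range; a task scheduler could hand out
    // [begin, end) slices of this loop to different threads.
    void Update(size_t begin, size_t end, float dt)
    {
        for (size_t i = begin; i < end; ++i)
        {
            positions[i].x += velocities[i].x * dt;
            positions[i].y += velocities[i].y * dt;
            positions[i].z += velocities[i].z * dt;
        }
    }
};
```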

Of course, entity systems are traditionally only used for gameplay code, and usually written in a very flexible (and potentially very slow) way, but I'm wondering if the entire engine could be written around these principles.


Random thought 3: Each VT Page mip should decrease in size

In 'traditional' virtual texturing, a page at a lower resolution mip level contains lower resolution versions of several higher resolution pages, so that every page, no matter what mip level, has the same size (for example, 128x128).

This is useful because you can more easily mix pages of different mip levels within the GPU page cache, and this is fine.
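
Concretely, with 128x128 pages a page at mip m covers the same virtual area as a 2x2 block of pages at mip m-1, so finding the lower resolution page that covers a given page is just a matter of halving its page coordinates (hypothetical helper):

```cpp
struct PageId { int x, y, mip; };

// The page one mip level up that covers this page's virtual area.
// Each lower-resolution page covers 2x2 pages of the mip below it.
inline PageId ParentPage(const PageId& p)
{
    PageId parent = { p.x >> 1, p.y >> 1, p.mip + 1 };
    return parent;
}
```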

I've been wondering though, should we really be doing the same on the disk side?

After all, we could just update the relevant pieces of a lower resolution mip page in the GPU cache, instead of loading the entire page, which covers an incrementally larger virtual texture area the lower the mip level is.
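
As a sketch of what "updating the pieces" could look like (assuming an OpenGL-based page cache with 128x128 page slots; everything here is hypothetical): when one 64x64 piece of a lower mip page comes in from disk, upload just that quadrant into the page's slot:

```cpp
#include <GL/gl.h>

// Hypothetical: upload one quadrant (64x64 of a 128x128 page) into
// the physical page cache, instead of uploading the whole page.
void UploadPagePiece(GLuint cacheTex,
                     int pageSlotX, int pageSlotY, // slot origin in texels
                     int quadrant,                 // 0..3
                     const void* texels)           // 64x64 RGBA8 data
{
    const int qx = (quadrant & 1) * 64;
    const int qy = (quadrant >> 1) * 64;
    glBindTexture(GL_TEXTURE_2D, cacheTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    pageSlotX + qx, pageSlotY + qy,
                    64, 64,
                    GL_RGBA, GL_UNSIGNED_BYTE, texels);
}
```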

None of this would be a problem if a virtual texture were just a texture, but it isn't.
It's a texture management system, or a texture atlas at best.

There are lots of seams within the virtual texture, where individual mip pages contain pieces of texture that, within the level, can be very far apart.

Loading only the pieces of the mip pages that we need would avoid loading pixels that we're not going to use anyway.

It would also make compressing the pages easier, because we could more easily omit channels that aren't being used by a page.
Before, any used channel would travel all the way up the mip map chain, because at least one of the higher resolution pages below it would be using it.

This is, of course, not an issue for the pre-cached pages.
And, in all fairness, the highest resolution pages are always the biggest problem, and this would only optimize loading the lower resolution pages.

All the above issues could also, potentially, be solved by grouping pages within the virtual texture in a more optimal way, but you'd probably end up with a lot of wasted space.

Also, compressing pages in 8x8 blocks (or something like that) and omitting unused channels in those individual blocks would help avoid the above problems too, and would probably be a good idea anyway.
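
A rough sketch of such a block format (completely made up, and uncompressed apart from the channel omission): each 8x8 block gets a small header with a channel mask, followed only by the planes for channels that are actually present:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical block format: an 8x8 block stores a channel mask and
// only the 64-byte planes for channels that are actually used.
// Layout: [mask : 1 byte][one 64-byte plane per set bit]

// Encode one 8x8 block from planar source data; unused channels
// (all zero here, for simplicity) are simply omitted.
void EncodeBlock(const uint8_t* planes[4], std::vector<uint8_t>& out)
{
    uint8_t mask = 0;
    for (int c = 0; c < 4; ++c)
    {
        bool used = false;
        for (int i = 0; i < 64 && !used; ++i)
            used = (planes[c][i] != 0);
        if (used) mask |= uint8_t(1u << c);
    }
    out.push_back(mask);
    for (int c = 0; c < 4; ++c)
        if (mask & (1u << c))
            out.insert(out.end(), planes[c], planes[c] + 64);
}
```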


Random thought 4: Compressing the mip map chain

I'm also wondering if there are good ways to compress an entire mip map chain together in context, considering that there isn't likely to be much difference between two adjacent mip map levels.

We're already pre-caching several mip map levels... so, can we leverage that information to decrease the on-disk size of all the pages that we load on the fly?
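
One speculative way to do that: since the lower resolution mip is already in the pre-cache, predict each page from the mip below it and store only the residual, which should compress far better than the raw pixels. A sketch (all names mine):

```cpp
#include <cstdint>

// Hypothetical delta scheme: predict each texel of a finer mip from
// the corresponding texel of the coarser (already loaded) mip and
// store only the difference. The subtraction wraps modulo 256, so
// decoding (pred + residual) reconstructs the texel exactly; the
// residual is typically near zero and compresses well.
void ComputeResidual(const uint8_t* hi, int hiW, int hiH, // finer mip
                     const uint8_t* lo,                    // coarser mip, (hiW/2) x (hiH/2)
                     uint8_t* residual)                    // out: hiW x hiH
{
    for (int y = 0; y < hiH; ++y)
        for (int x = 0; x < hiW; ++x)
        {
            // Nearest-neighbour prediction; bilinear would predict better.
            const uint8_t pred = lo[(y / 2) * (hiW / 2) + (x / 2)];
            residual[y * hiW + x] = uint8_t(hi[y * hiW + x] - pred);
        }
}
```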