Monday, January 4, 2010

First post of the year

Happy new year!
Let's all hope this decade leads us down a better path than the last one.

Lately most of my time that I haven't spent with my family has gone to my secret project™, but in between all that I managed to sneak in some time for my other projects.

Real-time CSG

It's not foolproof yet (there are some issues with duplicates), and only 2D so far (the basics are the same in 3D), but I've managed to come up with an algorithm that can perform CSG on each brush individually with a full CSG-tree implementation.

My old algorithm just added brushes together in a list.
Subtractive brushes were basically inverted and would work automatically.
Adding subtractive brushes and adding additive brushes only differed when brushes touched each other.

I'll explain the new algorithm in detail eventually, when I finish it, but the basic thinking behind it is that for any CSG operation you perform on any mesh, the resulting mesh will ALWAYS have the same polygons, or those polygons with some pieces removed.

This means that you never have to create new polygons; you just need to figure out which parts need to be visible, and which polygons need their vertex order inverted.

If you consider a CSG-tree to be a sort of iterative query, where for every polygon you go through the query and determine which parts of it are inside, outside or touching, then eventually you'll end up with all the parts that are visible.
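A minimal 2D sketch of that "clip, never create" idea, assuming each brush is stored as a convex set of half-planes (the representation and all names here are my illustration, not the actual implementation): an edge run through the query is only ever cut into inside/outside pieces of itself.

```python
# Each half-plane (a, b, c) means a*x + b*y <= c; a convex 2D brush is a
# list of such half-planes.  Clipping an edge against a brush never creates
# new geometry -- it only splits the edge into an inside part and outside
# parts, which is the classification step of the CSG query described above.

def clip_segment(p0, p1, planes):
    """Clip segment p0-p1 against a convex region.
    Returns (inside_part_or_None, list_of_outside_parts)."""
    t_enter, t_exit = 0.0, 1.0
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    for a, b, c in planes:
        num = c - (a * p0[0] + b * p0[1])   # slack at p0 w.r.t. this plane
        den = a * dx + b * dy               # rate of change along the edge
        if abs(den) < 1e-12:
            if num < 0:                     # parallel and fully outside
                return None, [(p0, p1)]
        elif den > 0:
            t_exit = min(t_exit, num / den)
        else:
            t_enter = max(t_enter, num / den)
    if t_enter >= t_exit:                   # never inside the brush
        return None, [(p0, p1)]
    lerp = lambda t: (p0[0] + t * dx, p0[1] + t * dy)
    inside = (lerp(t_enter), lerp(t_exit))
    outside = []
    if t_enter > 0.0:
        outside.append((p0, lerp(t_enter)))
    if t_exit < 1.0:
        outside.append((lerp(t_exit), p1))
    return inside, outside
```

For a union you would keep the outside pieces; for a subtraction, the outside pieces of the subtracted region; either way the pieces are fragments of the original edge.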

Update: I figured out that if you give each node an index and, while you traverse the CSG-tree, pass along the index of the node the current polygon belongs to, then you can compare this index with the index of the current brush you're splitting with.
This makes it possible to decide which overlapping polygon must be discarded, and which one must be kept!
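The tie-break itself is tiny; a sketch of the rule (the function name and the "later node wins" choice are mine, any consistent asymmetric rule would do):

```python
# When CSG leaves two coincident overlapping polygons -- one from the brush
# being split, one from the brush it is split against -- comparing the node
# indices they came from picks exactly one survivor, instead of keeping both
# (a duplicate) or neither (a hole).

def keep_overlapping(polygon_node_index, splitting_node_index):
    # Arbitrary but consistent choice: the polygon from the later node wins.
    return polygon_node_index > splitting_node_index
```

Because the rule is asymmetric, `keep_overlapping(a, b)` and `keep_overlapping(b, a)` never agree, so a coincident pair always resolves to a single polygon.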

Surface caching

Spherical harmonics won't work for storing specular highlights in a surface cache; I'm hoping the "Half-Life 2 basis" explained in the Half-Life 2 shading paper might do the trick.
I'm afraid it won't be accurate enough, though.
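For reference, the Half-Life 2 basis is three fixed orthonormal directions in tangent space (the vectors below are the ones given in Valve's Source shading paper; the encode/decode helpers are just my sketch of what storing a light as three weights would look like):

```python
import math

# The three orthonormal Half-Life 2 basis vectors in tangent space.
S3 = math.sqrt(1.0 / 3.0)
HL2_BASIS = [
    (-math.sqrt(1.0 / 6.0), -math.sqrt(1.0 / 2.0), S3),
    (-math.sqrt(1.0 / 6.0),  math.sqrt(1.0 / 2.0), S3),
    ( math.sqrt(2.0 / 3.0),  0.0,                  S3),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def encode(light_dir):
    """What a surface cache would store: one clamped weight per basis vector."""
    return [max(0.0, dot(b, light_dir)) for b in HL2_BASIS]

def decode_direction(weights):
    """Reconstruct a dominant light direction from the three weights.
    Clamping and summing multiple lights lose directional information,
    which is exactly why sharp speculars are at risk."""
    v = [sum(w * b[i] for w, b in zip(weights, HL2_BASIS)) for i in range(3)]
    n = math.sqrt(dot(v, v)) or 1.0
    return tuple(c / n for c in v)
```

A single light whose dot product with all three basis vectors is positive round-trips exactly (the basis is orthonormal); it's the clamped back-facing components and overlapping lights that smear the direction.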

There are also some really annoying inefficiencies in the whole virtual texture/surface cache idea, like having to render the geometry positions into a buffer, which would require anything that moves to re-render its positions.
For rigid bodies this could perhaps be done only once, by transforming the light instead of the world-space positions of the texels.
This won't help with skinned characters though; they will need their positions rendered into the page cache and their lighting re-calculated every frame.
I wonder if there's a way around this.
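A tiny sketch of that rigid-body shortcut (2D, rotation plus translation only; all function names are my illustration): keep the cached texel positions in object space and move the light into object space with the inverse of the object's transform. The distances, and hence the falloff, come out identical to lighting the moved texels in world space.

```python
import math

def transform_point(point, angle, translation):
    """Rigid transform: rotate by angle, then translate."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * point[0] - s * point[1] + translation[0],
            s * point[0] + c * point[1] + translation[1])

def inverse_transform_point(point, angle, translation):
    """Apply the inverse of the rigid transform above (e.g. to a light)."""
    x, y = point[0] - translation[0], point[1] - translation[1]
    c, s = math.cos(-angle), math.sin(-angle)
    return (c * x - s * y, s * x + c * y)

def distance(a, b):
    """Texel-to-light distance, the quantity the lighting falloff depends on."""
    return math.hypot(a[0] - b[0], a[1] - b[1])
```

Since a rigid transform preserves distances (and, with the light direction rotated the same way, angles), the cached object-space positions never need re-rendering while the object merely moves.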


  1. Hey Sander, I've also become interested in this surface caching idea since looking in to virtual texturing.

    I like the idea of being able to calculate lighting separately, like deferred rendering, but having it in texture space so you can draw lighting per-light, per-object without having to draw the entire object multiple times and you don't have issues with screen space buffers, like antialiasing.
    The idea doesn't lend itself very well to a dynamic scene as you said (at least the benefits of the cache part), with dynamic lights and objects, unless you can afford recalculating much of the data each frame (which might not be so bad).

    I think you'll find HL2's 3-direction lighting technique worse than spherical harmonics and only suitable for diffuse lighting; you'll really need to store a lot of directional data for reasonable-looking specular.

    One way you could do it is to draw a grid of dual-paraboloid maps into the buffer, one every 64x32 (2:1) or so pixels. You'd need a front and back (hence 2:1) for lighting, and a front and back for reflections/specular (so 4 hemispheres in all, making it 64x64). You could draw additively to the buffer for each light source by drawing simple quads for each page in range of the light; then to get the lighting at a pixel, use the surface normal and reflection vector to sample the two corresponding paraboloid maps and you have light and reflections.

    I tried a similar idea for baking lighting and environment reflections, and it worked quite well. The main drawback is that having 32x32 or larger paraboloid maps means the samples will be much more sparse, so if you have a light with shadows, drawing it to the buffer will mean really low-res shadows bound to the 'lightmap' resolution (but still higher-frequency lighting and reflections if the surface normal varies across the 'luxel').

    Good luck with all your projects,

  2. Sorry for the late response, but I'm currently trying to do 4 projects at the same time.
    Your idea is intriguing, although it sounds lower res than I would like. Are you sure about the Half-Life basis?

  3. No problem, I know the feeling.

    They only use it for diffuse lighting in Source, and they use cubemaps for specular and environment reflections. Even for diffuse lighting it does stand out a bit; it's not as convincing as spherical harmonics (but also not as expensive, of course).
    It's basically the equivalent of compressing the light from every direction into three directional lights with constant directions. So you could add specular, but it's not going to look like it's coming from the right direction (or one light might even blend across the three samples and create three specular reflections from a single light), and with multiple lights you'll get light colours blending together.
    That said, it might not look too bad on a rough surface, or with a fairly soft specular reflection.

    Since using spherical harmonics requires a fairly small sample size it's unlikely you'll ever need much more than half the sphere even for a rough surface, so what about some sort of tangent-space hemispherical harmonics? You might be able to get more accurate speculars using the same number of coefficients.

    Just had a quick google and found this.

    Will have a look in to it myself when I have some time.
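For what it's worth, the dual-paraboloid lookup described in the first comment can be sketched with the standard paraboloid parametrization (this is the textbook mapping, not the commenter's actual code):

```python
# Map a unit direction onto one of two paraboloid maps: the front map
# covers the hemisphere z >= 0, the back map covers z < 0; together they
# cover the whole sphere, which is why each probe needs the pair.

def dual_paraboloid_uv(direction):
    """Return (is_front_map, u, v) with u, v in [0, 1]."""
    x, y, z = direction
    if z >= 0.0:                       # front hemisphere
        u, v = x / (1.0 + z), y / (1.0 + z)
        return True, 0.5 * u + 0.5, 0.5 * v + 0.5
    else:                              # back hemisphere
        u, v = x / (1.0 - z), y / (1.0 - z)
        return False, 0.5 * u + 0.5, 0.5 * v + 0.5
```

Sampling lighting with the surface normal and reflections with the reflection vector, as suggested above, is then just two of these lookups per pixel.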


To you spammers out there:
Spam will be deleted before it shows up, so don't bother.