Happy new year!
Let's all hope this decade will lead us all towards a better path compared to the last one.
Lately most of my time that I haven't spent with my family has gone to my secret project™, but in between all this I managed to sneak in some time for my other projects.
It's not foolproof yet (there are some issues with duplicates), and only 2D so far (the basics are the same in 3D), but I've managed to come up with an algorithm that can perform CSG on each brush individually with a full CSG-tree implementation.
My old algorithm just added brushes together in a list.
Subtractive brushes were basically inverted and would work automatically.
Adding subtractive brushes and additive brushes only differed when brushes touched each other.
I'll explain the new algorithm in detail eventually, once I finish it, but the basic idea behind it is that for any CSG operation you perform on any mesh, the resulting mesh will ALWAYS consist of the same polygons, possibly with some pieces of those polygons removed.
This means you never have to create new polygons; you only need to figure out which parts remain visible, and which polygons need their vertex order inverted.
If you consider a CSG-tree to be a sort of iterative query, where for every polygon you go through the query and determine which parts of it are inside, outside or touching, then eventually you'll end up with all the parts that are visible.
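The "polygons only ever shrink" idea can be sketched with a 1D analogue (all names here are illustrative, not from my actual code): treat a polygon as an interval, and note that a subtractive query only removes pieces of it, never creating new geometry.

```python
# 1D analogue: a "polygon" is an interval, and a CSG query only ever
# removes pieces of it -- the surviving fragments are always sub-pieces
# of the original, so no new geometry is ever created.

def subtract(fragments, lo, hi):
    """Remove the region [lo, hi] from every fragment, keeping the rest."""
    result = []
    for (a, b) in fragments:
        if b <= lo or a >= hi:        # fragment entirely outside the cut
            result.append((a, b))
            continue
        if a < lo:                    # piece sticking out on the left
            result.append((a, lo))
        if b > hi:                    # piece sticking out on the right
            result.append((hi, b))
    return result

# An edge from 0 to 10, cut by two subtractive "brushes":
fragments = [(0, 10)]
fragments = subtract(fragments, 2, 4)
fragments = subtract(fragments, 6, 8)
print(fragments)  # -> [(0, 2), (4, 6), (8, 10)]
```

Every surviving fragment is a piece of the original interval, which is the whole point: visibility determination instead of geometry creation.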
Update: I figured out that if you give each node an index, and while traversing the CSG-tree you pass along the index of the node the current polygon belongs to, then you can compare that index with the index of the brush you're currently splitting with. This makes it possible to decide which of two overlapping polygons must be discarded, and which one must be kept!
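My reading of that tie-breaking rule, as a sketch (this is my interpretation for illustration, not the actual implementation): when two brushes share a surface, each produces a polygon on the same plane, and without a rule either both or neither would survive the query. Comparing node indices makes the choice deterministic and consistent from both sides.

```python
# Sketch of the node-index tie-break (illustrative interpretation).
# Each brush node in the CSG-tree has an index; every polygon carries
# the index of the node it came from.

def resolve_coplanar(polygon_node, splitting_node, facing_same_way):
    """Decide whether a polygon lying exactly on a plane of the brush we
    are splitting against should be kept or discarded."""
    if facing_same_way:
        # Only one of the two coplanar polygons may survive: let the one
        # from the later node win, so the decision agrees from both sides.
        return polygon_node > splitting_node
    # Opposite-facing coplanar polygons sit inside the combined solid,
    # so neither survives.
    return False

# Polygon from node 5, split against node 2's matching plane: kept.
# The mirror case, node 2's polygon split against node 5: discarded.
assert resolve_coplanar(5, 2, True) is True
assert resolve_coplanar(2, 5, True) is False
```

The important property is the asymmetry: whichever order the two brushes are visited in, exactly one of the two coplanar polygons is kept.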
Spherical harmonics won't work for storing specular highlights in a surface cache, so I'm hoping the "Half-Life 2 basis" explained in Valve's Half-Life 2 shading paper might do the trick.
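For reference, the Half-Life 2 basis is three orthonormal tangent-space directions that lighting gets projected onto. The projection/reconstruction below is a simplified diffuse sketch to show the mechanics, not the exact shader from the paper, and the helper names are mine.

```python
import math

# The three orthonormal tangent-space basis vectors from Valve's
# Half-Life 2 / Source shading paper ("radiosity normal mapping"):
HL2_BASIS = [
    (-1/math.sqrt(6), -1/math.sqrt(2), 1/math.sqrt(3)),
    (-1/math.sqrt(6),  1/math.sqrt(2), 1/math.sqrt(3)),
    ( math.sqrt(2/3),  0.0,            1/math.sqrt(3)),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def store(light_dir, intensity):
    """Project one directional light into three cached coefficients."""
    return [max(dot(b, light_dir), 0.0) * intensity for b in HL2_BASIS]

def evaluate(coeffs, normal):
    """Reconstruct lighting for a tangent-space normal from the cache
    (a simple weighted blend; the paper's shader differs in detail)."""
    weights = [max(dot(b, normal), 0.0) for b in HL2_BASIS]
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, coeffs)) / total if total else 0.0
```

Because the basis is only three fixed directions over the upper hemisphere, it's easy to see why I worry about accuracy for sharp specular lobes.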
I'm afraid it won't be accurate enough though.
There are also some really annoying inefficiencies in the whole virtual texture/surface cache idea, like having to render the geometry's position into a buffer, which would require anything that moves to re-render its position.
This could perhaps be done only once, when rigid bodies are concerned, by transforming the light instead of the world-positions of the texels.
This won't help with skinned characters though; they will need their positions rendered into the page cache and their lighting recalculated every frame.
I wonder if there's a way around this.
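The rigid-body shortcut a couple of paragraphs up can be sketched in 2D (purely illustrative, nothing here is actual engine code): a rigid transform is a rotation plus a translation, so instead of re-rendering a body's texel positions in world space every frame, the cached local-space positions stay fixed and the light is moved into local space with the inverse transform.

```python
import math

def apply(angle, tx, ty, x, y):
    """Apply a 2D rigid transform: rotate by angle, then translate."""
    c, s = math.cos(angle), math.sin(angle)
    return c * x - s * y + tx, s * x + c * y + ty

def inverse(angle, tx, ty):
    """Inverse of a 2D rigid transform: (R, t)^-1 = (R^T, -R^T t)."""
    c, s = math.cos(angle), math.sin(angle)
    return -angle, -(c * tx + s * ty), -(-s * tx + c * ty)

# Body rotated 90 degrees and translated by (1, 2); light at world (5, 0).
body = (math.pi / 2, 1.0, 2.0)

# Move the light into the body's local space once per frame, instead of
# moving every cached texel position into world space.
light_local = apply(*inverse(*body), 5.0, 0.0)

# Sanity check: transforming the local-space light forward again must
# land back on the world-space light position.
lx, ly = apply(*body, *light_local)
```

One light transform per body per frame replaces one position re-render per texel, which is exactly why this only pays off for rigid bodies: a skinned mesh has no single transform to invert.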