Yesterday I spent a little time on my VT project: I fiddled with priorities and got a good speedup by rewriting my threading code.
I wasn't using lock-free data structures before, and now that I am, things are much better.
That said, I did notice in my profiler that I'm now spending *a lot* of cycles in my lock-free data structure.
I'm assuming this is because the IO thread is often waiting for things to load.
I've now capped my IO thread queue to x items, on the assumption that before the thread would ever get around to loading the last texture in the queue, it would already have been superseded by new items anyway.
This way responsiveness improves, and I don't run the risk of my queue growing indefinitely.
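The capping idea can be sketched as a bounded request queue that evicts the stalest entry when full. This is a single-threaded, hypothetical sketch (the names `PageRequest` and `BoundedRequestQueue` are illustrative, and the real version would sit behind the lock-free structure); it only demonstrates the eviction policy.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Illustrative request type: in practice this would carry page coords, mip, etc.
struct PageRequest {
    int pageId;
};

class BoundedRequestQueue {
public:
    explicit BoundedRequestQueue(std::size_t cap) : capacity(cap) {}

    void push(PageRequest req) {
        if (queue.size() == capacity)
            queue.pop_front();          // full: drop the stalest request
        queue.push_back(req);
    }

    bool pop(PageRequest& out) {
        if (queue.empty()) return false;
        out = queue.front();
        queue.pop_front();
        return true;
    }

    std::size_t size() const { return queue.size(); }

private:
    std::size_t capacity;
    std::deque<PageRequest> queue;      // stand-in for the lock-free queue
};
```

With a cap of three, pushing five requests leaves only the last three, so the IO thread never wastes time on requests that were about to be replaced anyway.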
This is still a part I'm experimenting with, and I still need to try a lot of the things that have been suggested to me in the comment sections of the last couple of posts.
Once again, my test data is far from optimal.
In a perfect situation you'd have roughly a 1:1 pixel-to-virtual-page-texel ratio, but I sometimes need 32x more than that.
This is because of lots of textures 'stamped' onto the geometry, and lots of tiny slivers of geometry that use their own unique textures.
This makes me believe there are two ways of building virtual texture content.
One is to build your virtual texture as a big texture atlas: although the textures themselves are stored only once (subdivided into pages), the pages would actually be referenced many times in the indirection table.
UV coordinates would be calculated to take maximum advantage of this.
It might be somewhat tricky to do with multiple mip-levels, but it would speed up file IO and would remove some of the pressure on the page cache.
The UV coordinates would require more area, and the more uniquely textured area you have, the less efficient this would become.
This technique would be very close to regular, non virtual, texturing approaches, but would basically give you automatic texture management.
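The key observation in this atlas approach can be shown with a tiny sketch: nothing stops multiple entries in the indirection table from pointing at the same physical cache page, so a repeated 'stamp' texture is stored and loaded once. This is a minimal, hypothetical model (the names `IndirectionTable` and `PhysicalPage` are mine), not the actual implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Location of a page inside the physical page cache.
struct PhysicalPage { uint16_t x, y; };

class IndirectionTable {
public:
    IndirectionTable(int w, int h) : width(w), entries(w * h, {0, 0}) {}

    // Map a virtual page coordinate to a physical cache page.
    void map(int vx, int vy, PhysicalPage p) { entries[vy * width + vx] = p; }
    PhysicalPage lookup(int vx, int vy) const { return entries[vy * width + vx]; }

private:
    int width;
    std::vector<PhysicalPage> entries;
};
```

Mapping two different virtual pages to the same physical page is exactly the reuse described above: the page cache holds one copy, and only the indirection table knows it appears in several places.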
The other approach is to UV-unwrap all the geometry, avoiding overlaps unless the geometry truly lies on the same plane in world space.
The new UV coordinates would have to be aligned as much as possible with the original UV coordinate axes, to avoid the texturing looking different.
After this the original UV coordinates, and the textures belonging to each piece of geometry, would need to be rendered into the virtual texture.
This would more easily remove all the stamping problems I'm having, since the 'stamp' textures would be rendered on top of the original geometry (it would require some sorting though, somehow), and it would improve page locality.
Texel density would also be uniform across the geometry, which can be a bad thing as well: low-res textures could end up using much more memory (much like cube maps that are processed as regular textures).
It would also be harder to re-use identical pages with this technique, since the chance that 2 pages are identical would probably be much smaller.
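Even so, identical-page reuse could in principle be attempted at build time by hashing page contents and collapsing duplicates. This is a speculative sketch, not something from the project: `hashPage` and `storePage` are hypothetical names, and a real build tool would likely hash compressed page data instead of raw texels.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

using PageData = std::vector<uint8_t>;  // raw texels of one page

// FNV-1a hash of a page's contents.
static uint64_t hashPage(const PageData& p) {
    uint64_t h = 1469598103934665603ull;
    for (uint8_t b : p) { h ^= b; h *= 1099511628211ull; }
    return h;
}

// Store a page, reusing an existing identical one when the contents match.
// Returns the index of the stored page in 'store'.
int storePage(const PageData& page,
              std::vector<PageData>& store,
              std::unordered_map<uint64_t, int>& seen) {
    uint64_t h = hashPage(page);
    auto it = seen.find(h);
    if (it != seen.end() && store[it->second] == page)
        return it->second;              // duplicate page: reuse the stored copy
    store.push_back(page);
    int index = static_cast<int>(store.size()) - 1;
    seen[h] = index;
    return index;
}
```

With unique unwrapping the hit rate of this deduplication would probably be low, as noted above; it pays off mainly in the atlas-style approach.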
Also, I tried rendering all my textures transparently, just to see how it looked, and I realized that the readback approach to discovering which pages are needed has a serious defect: it can't look beyond the first surface, and since you might not have a page loaded yet, this won't even work with texture masks.
This is clearly a situation where an analytical approach would be superior.
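To make the contrast concrete: an analytical approach could estimate the required mip level of a surface from its screen-space UV derivatives, the same quantity the hardware uses for mip selection, without ever rendering it, so hidden or transparent surfaces are handled too. This is a minimal sketch under that assumption; `requiredMip` is a hypothetical helper, and a real system would feed it per-triangle projected extents.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Estimate the mip level needed for a surface, given the UV change per
// screen pixel in x and y (du/dx etc.) and the texture resolution.
int requiredMip(float dudx, float dvdx, float dudy, float dvdy, float texSize) {
    // Texels covered per pixel along each screen axis.
    float fx = std::sqrt(dudx * dudx + dvdx * dvdx) * texSize;
    float fy = std::sqrt(dudy * dudy + dvdy * dvdy) * texSize;
    float footprint = std::max(fx, fy);
    // One texel per pixel -> mip 0; four texels per pixel -> mip 2, etc.
    float mip = std::log2(std::max(footprint, 1.0f));
    return static_cast<int>(mip);
}
```

At a 1:1 pixel-to-texel ratio this yields mip 0, and as the surface shrinks on screen the requested mip rises, which is exactly the information the readback pass tries to recover by rendering page IDs.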
Alas, I won't be able to spend as much time on virtual textures as the last couple of weeks.
I'm going to work at a client's for the next two months, instead of at home, so what little time I can find will be fragmented at best.