I'm still recovering from an insane number of deadlines, and in the process I neglected my blog. (so sorry! *cries*)
Anyway.
Some time ago I received an email from Julian Mayer about his thesis on Virtual Texturing, but until now I hadn't had time to actually take a look at it. His thesis is a comprehensive overview of all the (publicly known) virtual texturing implementations and their implementation details, combined with his own test implementation, performance measurements and heuristics.
Considering its size and the amount of detail, Julian must've put an insane amount of time and effort into it!
It's well worth the read; if you're interested in virtual texturing, you should definitely check it out!
The only criticism I have of his thesis is that he measured his timings in fps instead of ms. Ironically, he mentions in the thesis itself that ms is the better metric, which makes me wonder why he didn't just use ms to begin with...
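Just to illustrate why ms really is the better metric (a little example of my own, nothing from the thesis): the same amount of added work produces wildly different fps deltas depending on where you start, which makes fps numbers hard to compare.

```cpp
#include <cstdio>

// The same 1 ms of added work, expressed as an fps drop at different
// starting framerates: huge at high fps, barely visible at low fps.
int main()
{
    const float addedCostMs = 1.0f;                 // hypothetical extra cost
    const float fpsValues[] = { 1000.0f, 300.0f, 60.0f, 30.0f };

    for (float fps : fpsValues)
    {
        float ms    = 1000.0f / fps;
        float newMs = ms + addedCostMs;
        std::printf("%7.1f fps (%5.2f ms) -> %6.1f fps (%5.2f ms)\n",
                    fps, ms, 1000.0f / newMs, newMs);
    }
    return 0;
}
```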
Also, he mentions that the borders of the pages need to be stored with the pages, which isn't true. As I mentioned before, you can create the borders from the pages you already have in memory, at the cost of a little bit of extra bookkeeping and copying of texture fragments; a rough sketch of what I mean is below.
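All of the names and sizes here are made up by me (this isn't code from any actual implementation): when a page is uploaded into a padded staging buffer, its border texels are copied from whatever neighbouring page happens to be resident in the cache, and if the neighbour isn't resident you just replicate the page's own edge texels.

```cpp
#include <cstring>
#include <unordered_map>
#include <vector>

struct Texel { unsigned char r, g, b, a; };

constexpr int PAGE_SIZE = 128;              // payload texels per side
constexpr int BORDER    = 4;                // border width in texels
constexpr int PADDED    = PAGE_SIZE + 2 * BORDER;

struct PageKey
{
    int x, y, mip;
    bool operator==(const PageKey& o) const
    {
        return x == o.x && y == o.y && mip == o.mip;
    }
};

struct PageKeyHash
{
    std::size_t operator()(const PageKey& k) const
    {
        return (std::size_t(k.x)   * 73856093u) ^
               (std::size_t(k.y)   * 19349663u) ^
               (std::size_t(k.mip) * 83492791u);
    }
};

// Resident pages, stored unpadded (PAGE_SIZE * PAGE_SIZE texels each).
std::unordered_map<PageKey, std::vector<Texel>, PageKeyHash> g_residentPages;

// Copy a w x h block of texels between two row-major texel arrays.
static void CopyRegion(Texel* dst, int dstPitch, int dstX, int dstY,
                       const Texel* src, int srcPitch, int srcX, int srcY,
                       int w, int h)
{
    for (int row = 0; row < h; ++row)
        std::memcpy(&dst[(dstY + row) * dstPitch + dstX],
                    &src[(srcY + row) * srcPitch + srcX],
                    w * sizeof(Texel));
}

// Fill the left border strip of 'padded' (a PADDED x PADDED upload buffer
// that already holds the page payload in its centre). The top, right,
// bottom and corner strips work the same way.
void FillLeftBorder(Texel* padded, int x, int y, int mip)
{
    auto it = g_residentPages.find({ x - 1, y, mip });
    if (it != g_residentPages.end())
    {
        // Copy the rightmost BORDER columns of the left neighbour.
        CopyRegion(padded, PADDED, 0, BORDER,
                   it->second.data(), PAGE_SIZE, PAGE_SIZE - BORDER, 0,
                   BORDER, PAGE_SIZE);
    }
    else
    {
        // Neighbour not resident: just replicate our own left edge (clamp).
        for (int b = 0; b < BORDER; ++b)
            CopyRegion(padded, PADDED, b, BORDER,
                       padded, PADDED, BORDER, BORDER,
                       1, PAGE_SIZE);
    }
}
```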
Another interesting thing he mentioned is the idea of having some sort of PVS for pages, something I've been thinking about as well (I may actually have mentioned something along those lines before). In J.M.P. van Waveren's last presentation about virtual texturing he mentions that they do some brute-force page visibility determination to figure out which MIP levels of which pages can possibly be visible to the player, and which ones cannot.
I imagine they do this by rendering the scene in all directions from all points within the area where the player can walk, jump & fall, obviously with some minimum distance between the sample points. This way you basically know which pages are visible from each point in the map, and theoretically, after massaging this data somewhat, you could create a system that figures out which pages you'll (potentially) need soon, and which pages can safely be discarded from the cache. For moving & dynamic objects the same concept can be used, except that a distance metric should determine which pages are required to properly render the object.
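Here's how I imagine such an offline bake could look, purely as a sketch; the names, the PageId/PvsCell types and the renderPageIds callback are all my own invention, and the actual rendering and feedback-buffer readback are abstracted away behind that callback.

```cpp
#include <functional>
#include <set>
#include <vector>

// Sample points on a grid over the walkable volume, render the scene in all
// directions from each point with a page-ID shader, and record which
// (page, mip) combinations were seen from each sample cell.

struct PageId
{
    int x, y, mip;
    bool operator<(const PageId& o) const
    {
        if (mip != o.mip) return mip < o.mip;
        if (y   != o.y)   return y   < o.y;
        return x < o.x;
    }
};

struct Vec3 { float x, y, z; };

// Assumed to be provided by the engine: renders a cube map (or 6 views) from
// 'pos' with a page-ID shader and invokes the callback for every page that
// ended up in the feedback buffer.
using RenderPageIds = std::function<void(const Vec3& pos,
                                         const std::function<void(PageId)>&)>;

// One PVS cell per sample point; in practice you'd key this by a cell index
// derived from the position instead of storing positions directly.
struct PvsCell
{
    Vec3             position;
    std::set<PageId> visiblePages;
};

std::vector<PvsCell> BakePagePvs(const std::vector<Vec3>& samplePoints,
                                 const RenderPageIds& renderPageIds)
{
    std::vector<PvsCell> cells;
    cells.reserve(samplePoints.size());

    for (const Vec3& p : samplePoints)
    {
        PvsCell cell;
        cell.position = p;
        // "Render in all directions" and collect every page ID that was hit.
        renderPageIds(p, [&](PageId id) { cell.visiblePages.insert(id); });
        cells.push_back(std::move(cell));
    }
    return cells;
}
```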
This information can also be used to group pages into blocks on disk so that they can be loaded in one go, decreasing latency. Obviously you'll need to be able to cache more pages, because you'll always be loading more pages than you strictly need, but since you can better predict which pages will potentially be visible soon, this doesn't have to be a problem.
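And a similarly rough sketch of the packing side, reusing the PageId and PvsCell types from the sketch above; a real packer would probably weight pages by how many cells they're shared between, but even naive first-seen ordering keeps co-visible pages adjacent on disk.

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Walk the baked PVS cells and greedily pack pages that are visible from the
// same cells into fixed-size blocks, so one disk read pulls in a whole group
// of pages you're likely to need together.

constexpr std::size_t PAGES_PER_BLOCK = 32;   // pages packed per disk block

std::vector<std::vector<PageId>>
PackPagesIntoBlocks(const std::vector<PvsCell>& cells)
{
    std::vector<std::vector<PageId>> blocks;
    std::set<PageId> alreadyPlaced;

    for (const PvsCell& cell : cells)
    {
        for (const PageId& page : cell.visiblePages)
        {
            if (!alreadyPlaced.insert(page).second)
                continue;                       // page already lives in a block

            if (blocks.empty() || blocks.back().size() == PAGES_PER_BLOCK)
                blocks.emplace_back();          // start a new disk block

            blocks.back().push_back(page);      // pages seen from the same
        }                                       // cell end up adjacent on disk
    }
    return blocks;
}
```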