Today I spent some time improving my tools, and they're much simpler and faster than before: it now takes only about 20 seconds to convert a Quake 4 level into a 16384x16384 virtual texture file and a separate geometry file.
I'm quite happy with the tool as it is; there are only a handful of things I still want to do, such as trying to rotate an allocated texture space to see if it fits better, and clamping texel density to some sane maximum.
Before, I used 128x128 pages stored in a texture array (basically a 3D texture where the z direction always uses nearest filtering) with a depth of 512 layers.
This worked out pretty well because you never get any bleeding artifacts between pages.
It is possible to get a visible seam between two pages rendered next to each other if there is strong enough contrast right on the edge between them; the seam then looks like an aliased edge.
However, I would consider it extremely rare: I only managed to see such a seam when I purposely made some handmade pages to provoke it, and I couldn't find one in more real-life artwork.
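For reference, this is roughly how such a page cache can be set up; a minimal sketch assuming OpenGL with texture array support (GL 3.0 / EXT_texture_array), with illustrative constants for the page size and layer count:

```c
#include <GL/glew.h>   /* assumes GLEW or any loader exposing GL 3.0 texture arrays */

enum { PAGE_SIZE = 128, CACHE_LAYERS = 512 };  /* the old 128x128 pages, 512 layers */

/* Allocate a 2D texture array used as the page cache: each layer holds one page.
   Filtering is bilinear within a layer but never blends across layers, so pages
   cannot bleed into each other. */
static GLuint create_page_cache(void)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 PAGE_SIZE, PAGE_SIZE, CACHE_LAYERS,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    return tex;
}

/* Upload a newly streamed-in page into a given layer of the cache. */
static void upload_page(GLuint tex, int layer, const void *rgba)
{
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    PAGE_SIZE, PAGE_SIZE, 1, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}
```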
So today I tried using smaller pages, but this caused some problems.
First of all, since smaller pages (say 64x64 or 32x32) each cover less texture space, I need more pages for the same number of texels.
In theory, smaller pages should be able to match what I render on screen more precisely.
However, since my texture array has a hard limit of 512 layers, even with pages 4x smaller in width/height I had no choice but to create a version of my texture cache that works with one giant 2D texture.
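The back-of-the-envelope numbers (my own illustration, not exact figures from the tool) show why the 512-layer limit becomes a problem with smaller pages:

```c
#include <stdio.h>

int main(void)
{
    /* The old cache: 512 layers of 128x128 pages. */
    const long cache_texels = 512L * 128 * 128;          /* 8,388,608 texels */

    /* With 32x32 pages, the same texel budget needs far more pages
       than a 512-layer texture array can hold... */
    const long pages_needed = cache_texels / (32 * 32);  /* 8192 pages */

    /* ...but they fit comfortably in one big 2D texture, e.g. 4096x2048
       packed as a 128x64 grid of 32x32 pages. */
    printf("pages needed: %ld (array limit: 512 layers)\n", pages_needed);
    printf("2D atlas 4096x2048 holds: %d pages\n", (4096 / 32) * (2048 / 32));
    return 0;
}
```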
I haven't bothered to put borders around my pages (yet), so there are plenty of filtering artifacts when rendering it like that.
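The borders I still need to add would work roughly like this; a hypothetical sketch (names and sizes are illustrative, not my actual code) of the scale/bias applied to a page's UVs so that bilinear filtering never reaches into a neighbouring page of the 2D atlas:

```c
/* Map a [0,1] coordinate within a virtual page to atlas texture coordinates,
   insetting by 'border' texels so bilinear filtering stays inside the page.
   page_x/page_y are the page's position in the atlas grid, measured in pages. */
typedef struct { float scale, bias_x, bias_y; } PageMapping;

static PageMapping page_mapping(int page_x, int page_y,
                                int page_size, int border, int atlas_size)
{
    PageMapping m;
    const float usable = (float)(page_size - 2 * border);
    m.scale  = usable / (float)atlas_size;
    m.bias_x = (page_x * page_size + border) / (float)atlas_size;
    m.bias_y = (page_y * page_size + border) / (float)atlas_size;
    return m;
}
/* In the shader: atlas_uv = page_uv * scale + (bias_x, bias_y).
   The border texels would be filled with texels from the neighbouring pages of
   the virtual texture, so filtering across a page edge still looks continuous. */
```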
But when I started rendering, lots of other artifacts popped up, which apparently are somehow more likely to occur with smaller pages.
So I fixed a couple of these artifacts, some of which also improved performance, and I still have a couple of mysterious ones left.
Eventually I'll probably build something that lets me record and replay a fixed path through my test level, so I can compare all the different parameters used to build and render a virtual texture and see which combinations are most efficient.
This will also help measure performance improvements (or regressions) when I try to implement things like texture compression.
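The record-and-replay idea doesn't need to be fancy; a minimal sketch (hypothetical file format and function names) that dumps the camera transform once per frame and plays it back for deterministic comparisons:

```c
#include <stdio.h>

/* One camera sample per frame: position plus yaw/pitch angles. */
typedef struct { float pos[3]; float yaw, pitch; } CameraSample;

/* Append the current camera state to the recording file. */
static void record_frame(FILE *f, const CameraSample *cam)
{
    fwrite(cam, sizeof *cam, 1, f);
}

/* Read back one frame of the recording; returns 0 at end of file.
   Replaying the same file with different page sizes / cache settings gives
   directly comparable page-upload counts and frame times. */
static int replay_frame(FILE *f, CameraSample *cam)
{
    return fread(cam, sizeof *cam, 1, f) == 1;
}
```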
Unfortunately I probably won't have much time to work on this in the near future, so I'm unsure whether I should start on a CSG preprocessor for the levels right now, because it'll take a couple of days to build and test.
On the upside, I actually managed to get into NVIDIA's "GPU Computing Registered Developer Program"!
This means I have access to OpenCL, which I would like to experiment with to see if I can use it to optimize virtual texturing.
I can imagine that determining which pages are currently visible could be done more efficiently through OpenCL.
It could be done mostly on the GPU, saving CPU time, and would reduce the amount of data that has to be read back to the CPU.
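For example, if the renderer writes a small feedback buffer with the page ID needed at each pixel, a kernel could reduce it to a compact bitset of visible pages, so only a few kilobytes need to be read back instead of the whole buffer. A rough sketch in OpenCL C (buffer names and layout are assumptions, not an actual implementation yet):

```c
/* feedback: one page ID per feedback-buffer pixel, written during rendering.
   page_bits: one bit per page of the virtual texture, cleared each frame by the
   host; for a 16384x16384 texture with 128x128 pages that's 16384 bits = 2 KB
   to read back. Requires 32-bit global atomics (core in OpenCL 1.1). */
__kernel void mark_visible_pages(__global const uint *feedback,
                                 const uint pixel_count,
                                 volatile __global uint *page_bits)
{
    uint i = get_global_id(0);
    if (i >= pixel_count)
        return;

    uint page = feedback[i];
    atomic_or(&page_bits[page >> 5], 1u << (page & 31u));
}
```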
Another thing it could help with is improving texture decompression speed.