Last weekend I spent a little more time on my VT experiment and added normal, specular and alpha channels to pages, all of which are only stored when they're actually used. I've also added alpha-testing support, since id Tech 4 makes heavy use of it, which made me split the geometry into two stages. I'm not entirely sure it's a good idea to support alpha testing in a real-world virtual texturing engine (especially one using deferred rendering), so I might remove it eventually.
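The "only stored when actually used" idea boils down to a per-page channel bitmask. Here's a minimal sketch of that; the names and per-channel byte sizes are hypothetical, chosen only to illustrate the storage saving:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical per-page channel mask: a channel's texels are only
// written to disk when its bit is set.
enum PageChannels : uint32_t {
    kDiffuse  = 1u << 0,
    kNormal   = 1u << 1,
    kSpecular = 1u << 2,
    kAlpha    = 1u << 3,
};

constexpr int kPageSize = 128; // texels per page side

// Bytes stored for one page, with made-up per-channel sizes:
// RGB diffuse, two-component normal (z reconstructed later), and
// one byte each for specular and alpha.
size_t StoredPageBytes(uint32_t mask) {
    const size_t texels = kPageSize * kPageSize; // 16384
    size_t bytes = 0;
    if (mask & kDiffuse)  bytes += texels * 3;
    if (mask & kNormal)   bytes += texels * 2;
    if (mask & kSpecular) bytes += texels * 1;
    if (mask & kAlpha)    bytes += texels * 1;
    return bytes; // e.g. diffuse-only: 49152; all four: 114688
}
```

A page that only needs diffuse data costs less than half of a fully populated one, which is why skipping unused channels adds up across a whole virtual texture.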
My short-term goal is to render this into a G-buffer and do some simple deferred rendering (a naive, straightforward implementation will do for the time being).
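As a taste of what even a naive G-buffer involves, here's one common space-saving trick (an illustration of the general technique, not necessarily what I'll end up using): storing only the x and y of a unit view-space normal in the G-buffer and reconstructing z during the lighting pass:

```cpp
#include <algorithm>
#include <cmath>

// Common G-buffer space-saver: store only x/y of a unit view-space
// normal, reconstruct z when lighting. Assumes normals face the
// camera (z >= 0), which holds for visible front-facing surfaces.
struct PackedNormal { float x, y; };

PackedNormal PackNormal(float nx, float ny) {
    return {nx, ny}; // z is implied
}

void UnpackNormal(const PackedNormal& p, float out[3]) {
    out[0] = p.x;
    out[1] = p.y;
    // z = sqrt(1 - x^2 - y^2), clamped against rounding error.
    out[2] = std::sqrt(std::max(0.0f, 1.0f - p.x * p.x - p.y * p.y));
}
```

For example, packing the normal (0.6, 0, 0.8) keeps only (0.6, 0), and unpacking reconstructs z = sqrt(1 - 0.36) = 0.8.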
I've also spent some time optimizing my import pipeline; each stage (a separate executable) now takes about a minute at most, except the stage that creates the virtual texture itself. That final stage doesn't scale very well: it can go from a minute to 30 minutes depending on the size of the virtual texture. Most of that time is spent creating the mipmaps for each page, because I'm not using simple box filtering but some better-quality filters instead.
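To show why the filter choice dominates the cost, here's a 1D sketch contrasting a plain box downsample with a wider tent filter; the weights and border clamping here are my own simplification, but the point (a wider footprint means more reads and arithmetic per output texel) carries over to the 2D page mips:

```cpp
#include <algorithm>
#include <vector>

// Cheap box downsample: average each pair of texels.
std::vector<float> BoxDown(const std::vector<float>& src) {
    std::vector<float> dst(src.size() / 2);
    for (size_t i = 0; i < dst.size(); ++i)
        dst[i] = 0.5f * (src[2 * i] + src[2 * i + 1]);
    return dst;
}

// Tent filter with weights 1/8, 3/8, 3/8, 1/8 over a 4-texel
// footprint; borders are clamped. Wider footprints like this are
// what make per-page mip generation noticeably slower than a box.
std::vector<float> TentDown(const std::vector<float>& src) {
    std::vector<float> dst(src.size() / 2);
    const int n = static_cast<int>(src.size());
    for (size_t i = 0; i < dst.size(); ++i) {
        const int c = 2 * static_cast<int>(i);
        auto at = [&](int j) { return src[std::clamp(j, 0, n - 1)]; };
        dst[i] = (at(c - 1) + 3 * at(c) + 3 * at(c + 1) + at(c + 2)) / 8.0f;
    }
    return dst;
}
```

On a hard edge like {0, 0, 8, 8}, the box filter keeps it razor sharp ({0, 8}) while the tent softens it ({1, 7}); that smoothing is what reduces aliasing in the smaller mips, at the cost of extra work per texel.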
I'm considering just using pre-mipped source textures, since that would bring the VT creation time back down to essentially the time it takes to write it to disk, but it would snap all my pages to 128x128 coordinates, which could create a lot of additional wasted texels in the worst case. It would also only be a temporary solution, since eventually I want to re-parameterize the geometry and render the textures into this newly created texture space. That won't allow me to re-use as many pages as I do now (which currently makes it possible to store a 6 GB virtual texture in 500 MB, uncompressed), but it causes less wasted texel space, simplifies the geometry and, most importantly, helps with page locality (fewer pages visible at the same time), which should help latency-wise.
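The snapping cost is easy to estimate: pad each texture dimension up to the next multiple of the 128-texel page size and count the extra texels. This is just a back-of-the-envelope helper, not code from the actual pipeline:

```cpp
constexpr int kPage = 128; // page size in texels

// Texels wasted when a w x h texture is padded up to page-aligned
// dimensions by snapping to 128x128 coordinates.
long WastedTexels(int w, int h) {
    const long pw = ((w + kPage - 1) / kPage) * static_cast<long>(kPage);
    const long ph = ((h + kPage - 1) / kPage) * static_cast<long>(kPage);
    return pw * ph - static_cast<long>(w) * static_cast<long>(h);
}
// A 128x128 texture wastes nothing, but a 129x129 one pads out to
// 256x256: 256*256 - 129*129 = 48895 wasted texels, nearly 3/4 of
// the padded area.
```

That near-worst case (both dimensions just past a page boundary) is why snapping worries me, even though it would make VT creation almost free.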