I mentioned an idea in the past where lighting would be performed directly in the GPU-side cache of the virtual texture.
This would have some advantages: when you render the scene, you basically only need to sample a single texture, you get perfect filtering, and you can render everything using standard anti-aliasing techniques (unlike, for example, deferred rendering).
Blur the pages the right way and you can fake skin subsurface scattering.
Proper transparency with lighting would be a lot simpler too!
Of course, there are downsides as well. (Can you say post-processing?)
One of the bigger fundamental problems with this approach is that you somehow need to know the position of each pixel within the cache texture.
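To make that concrete, here's a minimal sketch of the address translation involved, ignoring mip levels for brevity; the sizes, names, and table layout are all made up for illustration.

```cpp
#include <cstdint>

// Hypothetical sizes; a real setup would derive these from the virtual
// texture size, the page size, and the cache size.
constexpr int kPageSize   = 128;   // texels per page side
constexpr int kTablePages = 1024;  // indirection table is 1024x1024 pages

// One indirection table entry: where a virtual page currently
// lives in the physical cache texture.
struct PageEntry {
    uint16_t cacheX;  // page coordinates within the cache
    uint16_t cacheY;
};

PageEntry pageTable[kTablePages][kTablePages];  // kept up to date elsewhere

// Translate a virtual texture coordinate (0..1) into a texel position
// inside the physical cache texture. This is exactly the mapping you
// need per pixel if you want to light "into" the cache.
void virtualToCache(float u, float v, int& texelX, int& texelY)
{
    // Which virtual page does this coordinate fall into?
    int pageX = static_cast<int>(u * kTablePages);
    int pageY = static_cast<int>(v * kTablePages);

    // Where is that page resident in the cache right now?
    const PageEntry& entry = pageTable[pageY][pageX];

    // Offset within the page, in texels.
    int inPageX = static_cast<int>((u * kTablePages - pageX) * kPageSize);
    int inPageY = static_cast<int>((v * kTablePages - pageY) * kPageSize);

    texelX = entry.cacheX * kPageSize + inPageX;
    texelY = entry.cacheY * kPageSize + inPageY;
}
```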
Before, I was thinking along the lines of rendering into the cache texture using predefined batches of geometry per page.
That requires a lot of preprocessing, costs a lot of memory (or demands complicated schemes to avoid it), and forces you to either render the same batch to different mip levels or use monolithic batches for the highest (coarsest) mip levels, where you'd basically be rendering EVERYTHING (which would be ridiculous).
But now I realize it might be better to limit each triangle to a single page, and then render the scene into the cache texture, using the indirection table to transform each triangle to the right page at the right size!
It has some annoying issues, though: it needs per-vertex texture coordinates or a geometry shader. But it should work.
The caching of pixels-in-page-coordinates wouldn't be as straightforward either.
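Still, to illustrate the idea, here's a rough sketch of what the per-vertex work of that cache-update pass could look like, written as plain C++ rather than shader code. The PageMapping layout, the constants, and especially the per-triangle pageX/pageY inputs are assumptions; those per-triangle inputs are exactly why you'd need per-vertex texture coordinates or a geometry shader.

```cpp
// Sketch of the per-vertex work for the cache-update pass, written as
// plain C++ for clarity. All names, constants, and the table layout
// are assumptions made for illustration.

constexpr int kTablePages = 1024;   // pages per side at the finest mip
constexpr int kCachePages = 32;     // the cache texture is 32x32 pages

// What the indirection table tells us about the page a triangle lives in.
struct PageMapping {
    int cacheX, cacheY;  // where the page is resident in the cache
    int mip;             // the mip level the page is resident at
};

// Map one vertex into clip space so the rasterizer writes the lit
// surface into the correct cache texels. Because every triangle is
// limited to a single page, all three vertices share the same
// PageMapping and page indices, so the triangle is transformed as a
// whole and never straddles a page border.
void cacheVertexShader(float u, float v,         // virtual UV of the vertex
                       int pageX, int pageY,     // page the triangle lives in
                       const PageMapping& page,  // from the indirection table
                       float& clipX, float& clipY)
{
    // At mip m there are fewer, larger virtual pages per side.
    int pagesAtMip = kTablePages >> page.mip;

    // The vertex's position within its page, in 0..1.
    float localU = u * pagesAtMip - pageX;
    float localV = v * pagesAtMip - pageY;

    // The corresponding position in the cache texture, in 0..1.
    // Every resident page occupies the same physical footprint.
    float cacheU = (page.cacheX + localU) / kCachePages;
    float cacheV = (page.cacheY + localV) / kCachePages;

    // Standard UV-to-clip-space mapping.
    clipX = cacheU * 2.0f - 1.0f;
    clipY = cacheV * 2.0f - 1.0f;
}
```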
That said, I consider this surface caching idea just a (fun!) thought experiment.
For anything serious, I'd use a deferred renderer!