Monday, June 8, 2009

Deferred Virtual Texture Shading

For some time now I've had this, admittedly pretty vague, idea about combining deferred shading and virtual textures (aka "mega-texture").

It hit me that most of the attributes stored in a G-buffer during deferred shading are already present in a virtual texture, so if you combined the two you'd only need to store the virtual-texture coordinates, which would allow for much smaller G-buffers.
Of course that's not entirely true, since a pixel in a G-buffer almost never exactly matches a texel in a texture; that's what filtering is for.
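
To make the size difference concrete, here's a minimal C++ sketch; the exact layouts and bit counts are my own assumptions, not taken from any particular engine:

    #include <cstdint>

    // A typical "fat" G-buffer pixel for classic deferred shading:
    struct GBufferPixel {
        uint8_t albedo[3];   // diffuse color
        int8_t  normal[3];   // encoded surface normal
        uint8_t specular;    // specular intensity
        float   depth;       // linear depth
    }; // 11+ bytes for every screen pixel

    // If the surface attributes already live in the virtual texture,
    // the G-buffer only has to record *where* in that texture we are:
    struct VTCoordPixel {
        uint32_t pageId : 20; // which virtual-texture page (assumed 20 bits)
        uint32_t texelX : 6;  // texel within a 64x64 page
        uint32_t texelY : 6;
    }; // 4 bytes for every screen pixel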

But what if we used our virtual-texture cache as the G-buffer itself?

The problems:
  1. We'd need to fill in some of the attributes of the virtual texture at runtime, like the positions/normals of the texels. However, for static geometry these could be pre-baked, and for dynamic geometry they could be cached. (Interestingly enough, this sounds a lot like geometry images; see the sketch after this list.)
  2. Another big problem: how can we (efficiently) render the lights into this "virtual G-buffer"?
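
As an illustration, a virtual-texture page acting as a "virtual G-buffer" tile might store something like the following; the layout is purely hypothetical:

    // Page side length in texels (an assumption; real systems vary).
    const int PAGE_SIZE = 64;

    struct VirtualGBufferTexel {
        float albedo[3];   // authored surface color (already in the texture)
        float normal[3];   // world-space normal: pre-baked for static
                           // geometry, refreshed on cache-in for dynamic
        float position[3]; // world-space position of this texel, much
                           // like a geometry image
    };

    struct VirtualGBufferPage {
        VirtualGBufferTexel texels[PAGE_SIZE][PAGE_SIZE];
    };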
I'm not sure how the second problem could be solved without losing all the benefits of deferred shading. It would definitely help to have some sort of hierarchy in which virtual-texture pages are matched with chunks of geometry.

That way we'd only calculate lighting for lights whose radius touches those chunks of geometry and their respective virtual-texture pages.
It would also make it possible to move the paging algorithm completely to the CPU side; that way we wouldn't need a nasty readback from the GPU to discover which virtual-texture pages to upload.
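
A rough sketch of what that hierarchy and the CPU-side loop might look like; every name and structure here is hypothetical, and visibility is reduced to a crude bounding-sphere test to keep it short:

    #include <vector>

    struct Sphere { float x, y, z, radius; };

    struct GeometryChunk {
        Sphere bounds;            // bounding sphere of this chunk
        std::vector<int> pageIds; // virtual-texture pages it maps to
    };

    struct Light { Sphere influence; };

    static bool spheresTouch(const Sphere& a, const Sphere& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        float r = a.radius + b.radius;
        return dx*dx + dy*dy + dz*dz <= r*r;
    }

    // Decides which pages to stream in and which pages each light
    // dirties, using nothing but CPU-side data.
    void updatePaging(const std::vector<GeometryChunk>& chunks,
                      const std::vector<Light>& lights,
                      const Sphere& cameraBounds, // crude visibility proxy
                      std::vector<int>& pagesToUpload,
                      std::vector<int>& pagesToRelight) {
        for (const GeometryChunk& chunk : chunks) {
            if (!spheresTouch(chunk.bounds, cameraBounds))
                continue; // chunk not (potentially) visible
            // Pages of a visible chunk must be resident.
            pagesToUpload.insert(pagesToUpload.end(),
                                 chunk.pageIds.begin(), chunk.pageIds.end());
            // Only lights whose radius touches the chunk affect its pages.
            for (const Light& light : lights) {
                if (spheresTouch(chunk.bounds, light.influence))
                    pagesToRelight.insert(pagesToRelight.end(),
                                          chunk.pageIds.begin(),
                                          chunk.pageIds.end());
            }
        }
    }

The important property is that everything above runs on plain CPU data: deciding which pages to stream, and which pages each light touches, never requires reading anything back from the GPU.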

But if (a BIG if) these problems were solved, it would give us some interesting properties:
  • Lighting calculations can be cached. (See the sketch after this list.)
  • A virtual texture would be able to contain a lot more data, at a much higher quality, than a G-buffer (which could create a disk-space problem).
  • Rendering the final image would simply be a matter of drawing the geometry with a single texture.
  • Transparent surfaces wouldn't need to be rendered in a separate pass. (depth peeling?)
  • All shadows and lighting would automatically be filtered. (so you'd better have a high-res virtual texture!)
  • Filters and blurs could be applied to lit surfaces (although there would be some difficulties at cache-page boundaries), which could be useful for subsurface-scattering effects.
  • If you ignore all the caching, the cost of all the calculations would be relatively constant from frame to frame, scaling nicely with the amount of geometry and lights.
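
For instance, the lighting cache could be as simple as tagging each resident page with a hash of the lights that touched it, and only re-shading the page when that hash changes. This is a minimal sketch under my own assumptions; moving lights would also need their transforms folded into the hash:

    #include <cstdint>
    #include <vector>

    // Hypothetical cache entry: a lit page is only re-shaded when the
    // set of lights affecting it changes.
    struct LitPage {
        uint64_t lightSetHash = 0; // hash of the lights from last shading
        // ...the shaded texels live in the physical cache texture
    };

    // FNV-1a hash over the ids of the lights touching a page.
    uint64_t hashLights(const std::vector<int>& lightIds) {
        uint64_t h = 14695981039346656037ull; // FNV-1a offset basis
        for (int id : lightIds) {
            h ^= static_cast<uint64_t>(id);
            h *= 1099511628211ull;            // FNV-1a prime
        }
        return h;
    }

    // Returns true if the page has to be re-shaded this frame.
    bool needsRelight(LitPage& page, const std::vector<int>& lightIds) {
        uint64_t h = hashLights(lightIds);
        if (h == page.lightSetHash)
            return false; // same lights as before: reuse the cached result
        page.lightSetHash = h;
        return true;
    }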
The questions are: is it possible, and is it fast enough?