Thursday, June 18, 2009

Deferred Virtual Texture Shading 3

Just realized another good property of my "Deferred Virtual Texture Shading" idea:
  • Anti-aliasing is not a problem with this technique, because all the shading occurs in texture space; the final pass is 'simply' rendering geometry with one (or two) textures ;)
  • Update: One downside: if we render our lights in texture space, we can't use proxy meshes to restrict lighting to only the pixels that are visible to both the light and the camera. To reduce this disadvantage we could use some sort of octree/hashed grid/whatever to determine which texture pages are 'hit' by the light and only redraw those. If we also take page mipping into account, and only redraw the part of the mip that contains the pages we're touching, we actually get back the "the smaller the light, the fewer pixels we process" advantage of deferred lighting.
  • Update 2: Another interesting property is that smoothing the lighting over a surface, to make it look more curved than it actually is, would be much easier in texture space...
I really need to allocate some time and try to implement this technique ...
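As a rough illustration of the "which texture pages does this light hit" query from the first update, here is a minimal hashed-grid sketch. All names, the cell size, and the API are my own invention — the post only says "octree/hashed grid/whatever" — so treat this as one possible shape of the idea, not an implementation:

```python
# Hypothetical sketch: map world-space cells to the virtual-texture
# pages whose geometry overlaps them, then conservatively query the
# cells covered by a light's bounding sphere.
from collections import defaultdict

CELL = 4.0  # assumed world-space size of one grid cell


def cell_of(p):
    """Quantize a world-space position to integer cell coordinates."""
    return tuple(int(c // CELL) for c in p)


class PageGrid:
    def __init__(self):
        self.cells = defaultdict(set)

    def insert(self, page_id, position):
        # Register that some geometry using `page_id` lies at `position`.
        self.cells[cell_of(position)].add(page_id)

    def pages_hit_by_light(self, center, radius):
        # Walk every cell overlapped by the light's bounding sphere
        # and union the pages registered there. Conservative: a page
        # may be returned even if the light only grazes its cell.
        lo = cell_of(tuple(c - radius for c in center))
        hi = cell_of(tuple(c + radius for c in center))
        hit = set()
        for x in range(lo[0], hi[0] + 1):
            for y in range(lo[1], hi[1] + 1):
                for z in range(lo[2], hi[2] + 1):
                    hit |= self.cells.get((x, y, z), set())
        return hit
```

A small point light then only triggers redraws for the handful of pages near it, which is exactly the "smaller light, fewer pixels" behaviour described above.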


  1. If I understand your idea correctly, rasterized edge aliasing will still be a problem, i.e., the jaggies or stair-step effect, but at least for that you have the option of using standard MSAA.

    But since lighting information will be fetched from a virtual texture there might be other aliasing problems when the resolution of the light texture and its screen coverage don't match up. This is probably fixed by the virtual texture's ability to adaptively fetch arbitrarily higher/lower resolution chunks as needed.

    This makes me think your idea is an extension of the venerable Surface Caching method from Quake, except the lighting is generated on the fly at arbitrary resolution.

    I suspect it would be technically very challenging to pull this off. Not only do you have to worry about reading from the virtual texture, but also about writing out your generated lighting info to amortize the costs. I think it would be most useful for something like a slow-moving sun light, where you can reuse the cached lighting information over a long time and update it asynchronously. This is just idle speculation, since I'm not sure I know exactly how this would work. But it's always good to explore the less traveled corners ;)
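    The slow-moving-sun amortization the comment speculates about could look something like the sketch below: cache which sun direction each resident page was last shaded with, and only re-shade a page once the sun has rotated past a threshold. The class, threshold, and API are all invented here purely for illustration:

    ```python
    # Idle sketch of amortized per-page lighting: a page is re-shaded
    # only when the sun has moved more than REDRAW_ANGLE since the
    # page was last updated.
    import math

    REDRAW_ANGLE = math.radians(2.0)  # assumed staleness threshold


    def _dot(a, b):
        return sum(x * y for x, y in zip(a, b))


    class LightingCache:
        def __init__(self):
            # page_id -> unit sun direction used when last shaded
            self.shaded_with = {}

        def pages_to_reshade(self, resident_pages, sun_dir):
            stale = []
            for page in resident_pages:
                last = self.shaded_with.get(page)
                # Stale if never shaded, or if the angle between the
                # old and new sun directions exceeds the threshold.
                if last is None or math.acos(
                    max(-1.0, min(1.0, _dot(last, sun_dir)))
                ) > REDRAW_ANGLE:
                    stale.append(page)
                    self.shaded_with[page] = sun_dir
            return stale
    ```

    The stale list could then be fed to an asynchronous shading pass, so the cost of re-lighting spreads over many frames instead of hitting every frame.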

  2. "If I understand your idea correctly, rasterized edge aliasing will still be a problem"

    Since the last step is essentially identical to a forward renderer with one or two textures applied to each triangle, it should essentially have the same options as you would have with a forward renderer.

    "I suspect it would be technically very challenging to pull this off"

    Oh yes, I'm not even entirely convinced it would work (properly) at all! But it sure is fun to explore different ideas ;)
