So this made me think of an old idea of mine: rendering the lighting of a scene into the page buffer texture of a virtual texture setup.
For the Oculus Rift, VR, and 3D in general, this would have the following benefits:
- Lighting is (partially) decoupled from the camera (don't need to do all the lighting twice)
- Lighting can be (partially) cached; diffuse lighting stays valid as long as nothing moves between the surface and the light
- Lighting can be updated at a different rate than the frame rate if/when necessary
- Diffuse and specular lighting could, perhaps, be updated at different rates? (this would mess up physically based materials, though, unless lighting normalization is done during the final pass)
- Specular could have a different update frequency depending on distance from the camera
- Specular could be calculated for both cameras at once; materials would only have to be read once, and specularity could perhaps be compressed as 1 color + 2 intensities?
- Lighting resolution can be decoupled from the actual resolution of the final rendering of the scene (which is fine because we generally want smooth shadows anyway)
- Specular will obviously look better in texture space, since it sidesteps the usual normal map aliasing issues.
- The final rendering pass, which is done twice (once per eye), would be relatively simple and cheap.
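The "1 color + 2 intensities" packing mentioned above could be sketched roughly as follows: both eyes' specular responses usually share the light/material chromaticity and differ mainly in strength, so store one shared color plus a per-eye luminance. This is just an illustrative sketch under that assumption; the function names and the BT.709 luma weights are my choices, not anything from the idea itself.

```python
# Sketch: compress two per-eye RGB specular values into one shared
# chromaticity color plus two scalar intensities (one per eye).

def luminance(rgb):
    # BT.709 luma weights
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def pack_specular(spec_left, spec_right):
    """Compress two RGB specular values into (color, i_left, i_right)."""
    avg = [(l + r) * 0.5 for l, r in zip(spec_left, spec_right)]
    y = luminance(avg)
    if y == 0.0:
        return (0.0, 0.0, 0.0), 0.0, 0.0
    color = tuple(c / y for c in avg)   # shared chromaticity, luminance 1
    return color, luminance(spec_left), luminance(spec_right)

def unpack_specular(color, intensity):
    # Reconstruct one eye's specular from the shared color
    return tuple(c * intensity for c in color)
```

When both eyes really do share chromaticity the round trip is exact; when they differ, the chroma error gets split between the two eyes, which is probably acceptable for specular highlights.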
Note that you don't need unique texturing for this to work, just unique texture addressing.
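To illustrate that last point: unique texture addressing just means every surface texel has its own virtual UV, while the page table is free to map several virtual pages onto the same physical page in the cache, so texture data can still be repeated or shared. A minimal indirection sketch (all names and sizes are illustrative, not from any particular implementation):

```python
# Minimal virtual-texture indirection: a page table maps virtual
# page coordinates to physical page coordinates in the cache.
# Addressing is unique per virtual texel even when several virtual
# pages alias the same physical page (shared texture data).

PAGE_SIZE = 128   # texels per page side (illustrative)

def virtual_to_physical(u, v, page_table):
    """Map virtual texel coords (u, v) to physical cache coords."""
    page_x, page_y = u // PAGE_SIZE, v // PAGE_SIZE
    phys_page = page_table[(page_x, page_y)]     # indirection lookup
    in_page_x, in_page_y = u % PAGE_SIZE, v % PAGE_SIZE
    return (phys_page[0] * PAGE_SIZE + in_page_x,
            phys_page[1] * PAGE_SIZE + in_page_y)
```

Two different virtual pages pointing at the same physical page resolve to the same cached texels, which is exactly the "no unique texturing required" property.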