Just a random idea: what if lighting were cached in texture space using screen-oriented vectors?
Basically just like the Half-Life 2 "Basis for Radiosity Normal Mapping" vectors, only with all vectors oriented towards the screen to concentrate accuracy there.
The texture-space cache would eventually have to be refreshed once the camera moves too far relative to the cached pixels, but since we're not storing a single direction, we can interpolate (and slightly extrapolate) between the vectors to extend how long the cached specular reflections stay valid during camera motion. The further away pixels are, the longer they can stay cached. For stereo 3D rendering, both eye cameras can share the same data.
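To make the idea concrete, here is a minimal CPU-side sketch. It uses the classic Half-Life 2 radiosity-normal-mapping basis, but re-orients it around the current view direction rather than the surface normal, caches one shading sample per basis vector, and reconstructs lighting for a slightly moved camera by blending the three cached samples. All function names (`orient_basis`, `cache_lighting`, `reconstruct`) and the weighting scheme are illustrative assumptions, not from any engine:

```python
import math

# Classic Half-Life 2 "Radiosity Normal Mapping" basis (orthonormal,
# expressed relative to a local frame whose +Z axis we will align with
# the view direction so accuracy is concentrated toward the screen).
SQRT2 = math.sqrt(2.0)
SQRT3 = math.sqrt(3.0)
SQRT6 = math.sqrt(6.0)
HL2_BASIS = [
    (-1.0 / SQRT6,  1.0 / SQRT2, 1.0 / SQRT3),
    (-1.0 / SQRT6, -1.0 / SQRT2, 1.0 / SQRT3),
    (SQRT2 / SQRT3, 0.0,         1.0 / SQRT3),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def orient_basis(view_dir):
    """Build an orthonormal frame whose +Z is the view direction and
    express the HL2 basis vectors in that frame (world space)."""
    z = normalize(view_dir)
    # Pick a helper axis that is not parallel to z, then cross products.
    helper = (0.0, 1.0, 0.0) if abs(z[1]) < 0.99 else (1.0, 0.0, 0.0)
    x = normalize((
        helper[1] * z[2] - helper[2] * z[1],
        helper[2] * z[0] - helper[0] * z[2],
        helper[0] * z[1] - helper[1] * z[0],
    ))
    y = (
        z[1] * x[2] - z[2] * x[1],
        z[2] * x[0] - z[0] * x[2],
        z[0] * x[1] - z[1] * x[0],
    )
    return [
        tuple(b[0] * x[i] + b[1] * y[i] + b[2] * z[i] for i in range(3))
        for b in HL2_BASIS
    ]

def cache_lighting(view_dir, shade):
    """Evaluate the (expensive) shading function once per basis vector;
    in a real renderer these three values would live in the texel cache."""
    return [shade(b) for b in orient_basis(view_dir)]

def reconstruct(cached, cached_view_dir, new_view_dir):
    """Approximate lighting for a slightly moved camera by weighting each
    cached sample by how well its basis vector matches the new view."""
    basis = orient_basis(cached_view_dir)
    nv = normalize(new_view_dir)
    weights = [max(0.0, dot(b, nv)) for b in basis]
    total = sum(weights) or 1.0
    return sum(w * c for w, c in zip(weights, cached)) / total
```

In use, `shade` would be the full specular evaluation; the reconstruction stays cheap until the new view direction diverges too far from the cached one, at which point the cache for that texel is refreshed (distant pixels diverge more slowly, so they can stay cached longer, as noted above).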