A thought popped into my head this morning.
I'm wondering if the following scheme would work.
For all reflecting static objects the world would be rendered around it, into a paraboloid environment map.
Actually, only the part of the world facing the camera, so the paraboloid environment map would be aligned with the camera's axis but facing the opposite direction.
These maps would be relatively small, maybe virtual-texture-page sized (128 x 128), and rendered from a specified point within the reflecting object.
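To pin the projection down, the standard paraboloid mapping looks something like this (a minimal sketch; the function names and the [0, 1] remap convention are mine):

```python
def paraboloid_uv(dx, dy, dz):
    """Project a unit direction (camera-aligned, dz >= 0 for the
    front paraboloid) to [0, 1]^2 map coordinates.  Standard
    paraboloid mapping: uv = d.xy / (1 + d.z)."""
    u = dx / (1.0 + dz)
    v = dy / (1.0 + dz)
    # Remap from [-1, 1] to [0, 1] texture space.
    return (u * 0.5 + 0.5, v * 0.5 + 0.5)

def uv_to_texel(u, v, size=128):
    """Snap [0, 1] coordinates to a texel in a size x size map."""
    return (min(int(u * size), size - 1), min(int(v * size), size - 1))
```

So a direction pointing straight down the paraboloid axis lands in the center texel (64, 64) of a 128 x 128 map.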
Instead of storing the colors in the environment map, we store the virtual texture texel coordinates, which are always unique.
Using this, we could then bounce between the environment maps a couple of times, using the surface normal at a texel to find the destination texel in the next environment map.
If a texel doesn't have an environment map, the chain would stop and just use that texel's color.
After x bounces, it would just use some default color.
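The bounce loop I'm imagining, as a rough sketch (the `lookup` interface and all the names are invented for illustration, not a real API):

```python
def reflect(d, n):
    """Reflect direction d about unit normal n: r = d - 2*dot(d, n)*n."""
    dot = d[0]*n[0] + d[1]*n[1] + d[2]*n[2]
    return tuple(d[i] - 2.0 * dot * n[i] for i in range(3))

def trace_reflection(texel, view_dir, lookup, max_bounces=3,
                     default_color=(0, 0, 0)):
    """Follow the chain of virtual-texture coordinates through the
    environment maps.  `lookup(texel)` is a hypothetical function
    returning either ('color', rgb) for a plain texel, or
    ('reflective', normal, env_map) for a texel whose object has its
    own environment map; env_map maps an outgoing direction to the
    next virtual texel."""
    d = view_dir
    for _ in range(max_bounces):
        hit = lookup(texel)
        if hit[0] == 'color':      # no env map: use the texel itself
            return hit[1]
        _, normal, env_map = hit
        d = reflect(d, normal)     # bounce off the surface
        texel = env_map(d)         # jump to the destination texel
    return default_color           # give up after max_bounces
```

Two mirrors facing each other would ping-pong until the bounce budget runs out and then fall back to the default color.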
Obviously the texel coordinates would be an approximation, so some blur would have to be applied afterwards, and more blur would be required at grazing angles.
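For the grazing-angle blur, I'd probably just scale the kernel radius by how much the view direction lines up with the surface normal, something like this (the numbers are placeholder tuning values):

```python
def blur_radius(view_dot_normal, base=1.0, max_extra=4.0):
    """Widen the blur kernel at grazing angles.  view_dot_normal is
    |dot(view_dir, surface_normal)|: 1 when viewed head-on, 0 when
    fully grazing.  base and max_extra are in texels and purely
    made-up tuning values."""
    grazing = 1.0 - max(0.0, min(1.0, view_dot_normal))
    return base + max_extra * grazing
```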
Would this work/look good enough? I don't know.
Would it be fast? Probably not, but I think it would beat ray tracing.
Obviously there would be a lot of cost in the form of VSD, draw calls, and fill rate.
Whoops, because of reflections you'd be able to see the back of a reflecting object, which would require a second environment map rendered in the direction of the reflection.
Damn you reflection angles! Damn you to heck.
However, it might be possible to make some sort of simplified reflection graph and figure out which environment maps you'd need to render.
Deciding which one to use, and when, would be more complicated, however.
And it certainly won't help with performance.
Maybe objects could be pre-split into several environment maps, and we could then perform back-face/frustum culling etc. on these maps.
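The back-face part of that could be the usual normal-cone test: each pre-split map stores an average normal plus a cone half-angle, and gets culled when nothing in the cone can face the viewer (a rough sketch, names invented):

```python
import math

def map_visible(avg_normal, cone_cos, view_dir):
    """Normal-cone back-face test for one pre-split environment map.
    avg_normal is the cone axis, cone_cos the cosine of its
    half-angle, view_dir points from the camera toward the object;
    all vectors unit length, all names made up for the sketch."""
    # How much the cone axis faces the viewer.
    facing = -(avg_normal[0]*view_dir[0] +
               avg_normal[1]*view_dir[1] +
               avg_normal[2]*view_dir[2])
    # Some normal inside the cone faces the viewer iff the axis is
    # within 90 degrees + half-angle of the viewer direction,
    # i.e. facing > -sin(half-angle).
    return facing > -math.sqrt(max(0.0, 1.0 - cone_cos * cone_cos))
```

A map whose average normal points straight at the camera survives; one pointing dead away (with a zero-width cone) gets culled.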