Tuesday, February 28, 2012

Irradiance Volumes

So since my last post I've implemented Exponential Shadow Maps (not really that visible in these images, in retrospect) and Irradiance Volumes.
Here are the results (no lightmaps, only one dynamic light & an irradiance volume):

I was a little disappointed in the results from exponential shadow maps; it still takes a lot of tweaking of magic values to get reasonable results, and I really hate magic values. That said, I haven't done anything remotely intelligent with my camera frustum yet. Tweaking the near/far planes, cascaded shadow maps, etc. will likely improve the mileage I've been getting so far.
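
For reference, the core of the test is tiny; here's a minimal sketch in C++ (assuming the shadow map stores exp(c * depth) and has been pre-filtered), with the exponent c being exactly the kind of magic value I'm complaining about:

```cpp
#include <algorithm>
#include <cmath>

// Exponential shadow map test (sketch). The shadow map stores
// exp(c * occluderDepth) and can therefore be pre-filtered (blurred),
// because the visibility test below is linear in that stored value.
//
// 'c' is the magic constant: too low and everything leaks light,
// too high and exp() overflows in the shadow map.
float esmVisibility(float filteredExpDepth, // filtered exp(c * occluder depth)
                    float receiverDepth,    // depth of the pixel being shaded
                    float c = 80.0f)        // the magic value
{
    // exp(c * occluder) * exp(-c * receiver) = exp(c * (occluder - receiver)),
    // which drops below 1 when an occluder sits in front of the receiver.
    float v = filteredExpDepth * std::exp(-c * receiverDepth);
    return std::min(std::max(v, 0.0f), 1.0f);
}
```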

I remember reading in one of the presentations from Bungie (forgot which one, sorry!) about Halo 3, in which they actually used a combination of exponential shadow maps and variance shadow maps, taking the darker of the two shadow results for each pixel. That just sounds plain scary to me (you'd need to tweak two sets of magic values at the same time! yikes!), but maybe it's not nearly as bad as it sounds. Hell, I might even try it eventually. That said, exponential shadow maps are a huge improvement over regular shadow maps; they swap the bias problem for a far more subtle light-leakage problem.
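
If I ever do try it, I'd expect the combination to look roughly like this; a sketch, not Bungie's actual code, with the VSM half using the standard Chebyshev upper bound (and bringing its own magic minimum-variance value along):

```cpp
#include <algorithm>
#include <cmath>

// Variance shadow map visibility via the Chebyshev upper bound (sketch).
// The map stores (mean depth, mean squared depth), so it also filters well.
float vsmVisibility(float mean, float meanSq, float receiverDepth,
                    float minVariance = 1e-4f) // magic value set #2
{
    if (receiverDepth <= mean)
        return 1.0f; // in front of the average occluder: fully lit
    float variance = std::max(meanSq - mean * mean, minVariance);
    float d = receiverDepth - mean;
    return variance / (variance + d * d); // upper bound on visibility
}

// The Halo 3 trick as I understood it: each algorithm leaks light in
// different places, so taking the darker of the two hides both leaks.
float combinedVisibility(float esm, float vsm)
{
    return std::min(esm, vsm);
}
```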

Irradiance volumes are pretty cool, but they have their own set of problems. Most of the time they work great, except when there are strong discontinuities between adjacent 3D texels and the light gets incorrectly blended between them. If you look at the first image with the red curtain, the light really shouldn't leak through it (it just happens to look really good in this particular situation). Higher-resolution irradiance volumes help somewhat, but get expensive quickly.
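
To be clear about where the leak comes from: it's just trilinear filtering doing its job. A sketch of the lookup (the volume bounds here are made up for illustration):

```cpp
// Map a world position into the irradiance volume; volumeMin and
// volumeSize are hypothetical world-space bounds. The 3D texture unit
// then trilinearly blends the 8 texels around this coordinate, with
// no notion of what geometry sits between them, which is exactly how
// light ends up on the wrong side of the curtain.
struct Vec3 { float x, y, z; };

Vec3 worldToVolumeUVW(Vec3 p, Vec3 volumeMin, Vec3 volumeSize)
{
    return { (p.x - volumeMin.x) / volumeSize.x,
             (p.y - volumeMin.y) / volumeSize.y,
             (p.z - volumeMin.z) / volumeSize.z };
}
```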

The placement of the samples is crucial and takes a lot of time, since re-calculating the samples is slow. I haven't looked at clever ways to automate this; I have my doubts any automation would produce acceptable results, and it sounds like a time-sink. I'm pretty sure using multiple volumes would be much better, since discontinuities between areas could be controlled much more precisely, but I don't see how to render with multiple volumes at any reasonable performance without deferred rendering, so I'd need to implement that first (something I was planning to do anyway).

There's also a lot of improvement to be made in the sampling quality. Right now I just render the scene in 6 directions from every sample location, calculate the normalized average of each side, and store those. I do a couple of passes like this to simulate bounces, which probably just accumulates lots of errors, but it looks surprisingly good. This can obviously be done a lot better. The storage requirements are also pretty ridiculous right now: 6 RGBA values per sample location. My little prototype can handle this for now, but it's something that definitely needs a bit of attention.
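
For the curious: six directional values per sample is essentially an ambient cube, and one common way to evaluate it (squared-normal weighting, as in Valve's ambient cube; not necessarily exactly what my shader does) looks like this:

```cpp
struct Vec3 { float x, y, z; };

// Evaluate a 6-value "ambient cube" sample for a unit normal n by
// weighting the +X/-X, +Y/-Y, +Z/-Z colors with the squared normal
// components. The weights sum to 1, so no renormalization is needed.
Vec3 evalAmbientCube(const Vec3 cube[6], Vec3 n) // order: +X,-X,+Y,-Y,+Z,-Z
{
    Vec3 w = { n.x * n.x, n.y * n.y, n.z * n.z };
    const Vec3& cx = cube[n.x >= 0.0f ? 0 : 1];
    const Vec3& cy = cube[n.y >= 0.0f ? 2 : 3];
    const Vec3& cz = cube[n.z >= 0.0f ? 4 : 5];
    return { w.x * cx.x + w.y * cy.x + w.z * cz.x,
             w.x * cx.y + w.y * cy.y + w.z * cz.y,
             w.x * cx.z + w.y * cy.z + w.z * cz.z };
}
```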

As for my post-processing: I haven't implemented any bloom yet, but it's on my list (I'm only going to implement something very subtle; I seriously hate over-the-top bloom). Another thing I still want to implement is some sort of SSAO or SSDO, which I'm pretty sure will improve the quality of the visuals a lot; it'll bring out details where the irradiance volume is way too low-res. Also, if you look closely you'll see that I'm using subtle noise in my post-processing to smooth out the gradients in the darker areas, like I mentioned in my previous post.
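
That noise trick is just dithering before the quantization to 8 bits. A sketch with a made-up per-pixel hash (mine additionally weights the noise towards the darker areas):

```cpp
#include <cmath>
#include <cstdint>

// Cheap per-pixel hash noise in [0,1); any decent hash would do here.
float hashNoise(uint32_t x, uint32_t y)
{
    uint32_t h = x * 374761393u + y * 668265263u; // two large primes
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h ^ (h >> 16)) / 4294967296.0f;
}

// Dither before quantizing to 8 bits: adding +/-0.5 LSB of noise turns
// the visible banding in slow, dark gradients into far less noticeable grain.
uint8_t quantizeDithered(float value01, uint32_t px, uint32_t py)
{
    float dithered = value01 * 255.0f + (hashNoise(px, py) - 0.5f);
    float clamped = std::fmin(std::fmax(dithered, 0.0f), 255.0f);
    return uint8_t(clamped + 0.5f);
}
```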