Sunday, January 29, 2012

Physically Based Rendering

Lately I've been playing around with Physically Based Rendering (yes, I'm late to the party). It took me a while to get it working because I kept getting weird results that looked more like caustics than surface roughness. I checked the math again and again, and looked up as many source materials as I could find to compare against my implementation (just in case there were errors in some articles), but couldn't see what could cause these weird artifacts. The thing is, when I rendered the light on a sphere it looked perfect, but when I rendered it from the inside of a box it looked completely wrong! So I started to check all my inputs and they all seemed fine ... until I discovered that my light position was inverted. Wut? How the hell? It turned out that the wrong light position caused my halfway vector to be, well, something else entirely. And that combination made it all look plausible most of the time, which completely threw me off course.
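For anyone chasing a similar bug: the halfway vector is just the normalized sum of the (unit) light and view direction vectors, so a flipped light direction still produces a perfectly valid-looking unit vector, just pointing somewhere else entirely. A minimal sketch in Python rather than shader code (function names are mine):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def halfway(light_dir, view_dir):
    # The halfway vector used by microfacet specular terms:
    # h = normalize(l + v), with l and v as unit vectors.
    return normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))

view = normalize((0.0, 0.0, 1.0))
light = normalize((1.0, 0.0, 1.0))

h_good = halfway(light, view)
# An inverted light direction still yields a unit-length halfway vector,
# so nothing obviously breaks -- the highlight just ends up in the wrong place.
h_bad = halfway(tuple(-c for c in light), view)
print(h_good, h_bad)
```

Since both results are unit length, nothing in the math blows up; the error only shows in how the shading looks.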

I'm currently using the GGX microfacet distribution from the Microfacet Models for Refraction through Rough Surfaces paper. Here's what it looks like with some random textures and Lightwave models borrowed from Doom 3 (don't sue me!) for testing purposes.
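For the record, the GGX normal distribution term looks like this (sketched in Python rather than shader code; the roughness-squared-to-alpha remap is just one common convention, an assumption on my part -- the paper itself works directly with alpha):

```python
import math

def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    # GGX / Trowbridge-Reitz normal distribution function:
    # D(h) = a^2 / (pi * ((n.h)^2 * (a^2 - 1) + 1)^2)
    alpha = roughness * roughness  # common remap; the paper parameterizes alpha directly
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# A low roughness concentrates the distribution tightly around n.h = 1:
print(ggx_ndf(1.0, 0.1))  # tall, narrow peak
print(ggx_ndf(0.7, 0.1))  # falls off to almost nothing
```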



Some random thoughts about gloss / roughness maps & normal maps

While I was working on all this I realized that roughness maps describe surface detail at the micro level, while normal maps describe surface detail at the macro level. This made me think that, when generating normal map mip-maps, the normal detail lost at lower resolutions could be pushed into the gloss map instead. Of course this would probably work better with an anisotropic BRDF (directional roughness) than with an isotropic BRDF (uniform roughness). Since the two seem connected, there might even be a way to combine them into a single thing. ("anisotropic normal maps"? lol)
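To make that concrete: one known way to measure "lost" normal detail is to box-filter the normals without renormalizing; the shortened average encodes how much the fine normals disagreed, and that can be folded into roughness (this is essentially Toksvig's trick, not my invention; the exact variance-to-roughness mapping below is a simplification for illustration):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def average_normal(normals):
    # Box-filter a set of unit normals WITHOUT renormalizing the result.
    n = len(normals)
    return tuple(sum(v[i] for v in normals) / n for i in range(3))

def mipped_roughness(normals, base_roughness):
    # The averaged normal gets shorter the more the fine normals disagree;
    # turn that shortening into extra roughness. This particular mapping
    # is one simple choice, not the canonical one.
    na = average_normal(normals)
    length = math.sqrt(sum(c * c for c in na))
    variance = (1.0 - length) / length  # 0 when all normals agree
    alpha = base_roughness * base_roughness
    return math.sqrt(min(1.0, alpha + variance))

flat = [(0.0, 0.0, 1.0)] * 4
bumpy = [normalize((0.3, 0.0, 1.0)), normalize((-0.3, 0.0, 1.0)),
         normalize((0.0, 0.3, 1.0)), normalize((0.0, -0.3, 1.0))]

r_flat = mipped_roughness(flat, 0.2)    # unchanged: normals all agree
r_bumpy = mipped_roughness(bumpy, 0.2)  # rougher than the base value
print(r_flat, r_bumpy)
```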

This would probably work a lot better (especially with isotropic BRDFs) if you generate your normal maps from heightmaps, and create the gloss and normal map mip-maps from the mip-maps of the heightfield. The roughness would be calculated from the height variation within a lower-resolution mip pixel (compared to the highest-resolution mip), while the normal would simply be calculated from the lower-resolution mip directly. Maybe some sort of min-max mip-map approach could be used for the roughness. Of course, you'd still have to take into account that the further away you are from a gloss map, the smoother it'll look. (Disclaimer: I haven't actually tried any of this.)
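A rough sketch of that heightfield idea (untested, like I said -- and the variance-to-roughness `scale` factor below is a made-up tuning constant that would need calibrating per asset):

```python
import math

def mip_block_stats(heights):
    # For one low-res mip texel covering a block of fine-level heights,
    # return (mean_height, height_variance). The mean feeds the normal
    # calculation; the variance feeds the roughness.
    n = len(heights)
    mean = sum(heights) / n
    var = sum((h - mean) ** 2 for h in heights) / n
    return mean, var

def roughness_from_variance(var, scale=4.0):
    # Map height variance inside a block to a roughness value in [0, 1].
    # `scale` is a hypothetical constant relating height units to
    # microfacet slope -- pure guesswork for illustration.
    return min(1.0, math.sqrt(scale * var))

smooth_block = [0.50, 0.51, 0.50, 0.49]  # nearly flat 2x2 block
rocky_block = [0.10, 0.90, 0.20, 0.80]   # wildly varying 2x2 block

r_smooth = roughness_from_variance(mip_block_stats(smooth_block)[1])
r_rocky = roughness_from_variance(mip_block_stats(rocky_block)[1])
print(r_smooth, r_rocky)
```

The same mean/variance split could be computed hierarchically, one mip level from the previous one, which is where a min-max mip-map style approach would come in.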