Monday, February 24, 2014

Voxelization meshes

Just a random thought:
If the future is going to use voxelization to perform real-time indirect lighting / glossy reflections etc. (using techniques such as voxel cone tracing), and ignoring for a moment that this might not be ready for prime time just yet, wouldn't it then make sense to have lower resolution 'voxelization meshes', just like we have separate meshes for collision detection? The voxelized world representation is an approximation anyway, so it might save a lot of cycles (at the cost of memory) ..

On the other hand, static meshes could probably be pre-voxelized, and most of the dynamic meshes would be skinned meshes .. which would require you to skin the same character twice with two different meshes? Might still be worth it if the voxelization mesh is simple enough ..
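
A minimal sketch of what I mean, with a purely hypothetical asset layout (none of these names come from a real engine): the voxelization proxy lives next to the render and collision meshes, and the voxelization pass simply picks it instead of the full-detail mesh.

```cpp
#include <cstdint>
#include <vector>

struct Mesh {
    std::vector<float>    positions;  // xyz triplets
    std::vector<uint32_t> indices;
};

struct RenderableAsset {
    Mesh renderMesh;        // full-detail mesh used for rasterization
    Mesh collisionMesh;     // simplified mesh used by the physics system
    Mesh voxelizationMesh;  // even coarser proxy fed to the voxelization pass
};

// The voxelization pass uses the proxy instead of the render mesh.
// A skinned character would have to be skinned twice: once for renderMesh,
// once for voxelizationMesh.
const Mesh& MeshForVoxelization(const RenderableAsset& asset) {
    return asset.voxelizationMesh;
}
```

The memory cost is one extra (coarse) mesh per asset, which is exactly the trade-off mentioned above.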

Monday, January 27, 2014

VR head movement momentum

So about 2 weeks ago I went to Steam Dev Days and had a blast! Valve's presentations were certainly eye-opening on some issues. Can't wait until they put the videos online so I can see the presentations I wasn't able to attend; I'm sure they'll be great too.

Anyway, in one of the presentations (I think it was "wild west of VR") the guys behind AaaaaAAaaaAAAaaAAAAaAAAAA talked about their Oculus Rift version of the game and how they discovered an interesting trick where they would slowly tilt the world, which caused the players to automatically compensate by moving their heads in the opposite direction. This made the players feel like they were falling downwards. Also interesting is that they only had to do a 45 degree rotation to give the impression of a 90 degree turn, so that players wouldn't get neck pain while playing the game. Players would swear that they were looking straight down even though they weren't.

In other VR presentations they would talk about sideways and backwards movements making players feel sick etc. (which is sort of common knowledge already I guess)

After these presentations I was suddenly wondering... maybe we've got this all wrong .. what if it's not the movements themselves which make people feel sick? (or at least not completely)
What if the problem is that we're not simulating momentum for the head when doing these movements?
I mean, if you move sideways in real life there's no way your head will remain perfectly still; it'll bob slightly to the left or right (depending on which direction you're moving). Maybe our brain is expecting this, and when these natural movements are missing, people get sick?
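
To make the idea a bit more concrete, here's a minimal sketch of the kind of simulated head momentum I'm imagining: the virtual head lags slightly behind artificial (stick) movement like a damped spring, then settles back. All names and constants are made up for illustration, this isn't from any SDK.

```cpp
struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct HeadMomentum {
    Vec3 offset{0, 0, 0};    // lag of the head relative to the body
    Vec3 velocity{0, 0, 0};

    // bodyAccel = acceleration caused by artificial (stick) movement only.
    Vec3 Update(Vec3 bodyAccel, float dt,
                float stiffness = 60.0f, float damping = 12.0f) {
        // The head briefly resists the body's acceleration, then springs back.
        Vec3 force = (bodyAccel * -0.02f) - offset * stiffness - velocity * damping;
        velocity = velocity + force * dt;
        offset   = offset + velocity * dt;
        return offset;  // add this (small!) offset to the HMD pose each frame
    }
};
```

The offset would have to stay tiny, since it's added on top of the real tracking data and you never want to fight the player's actual head movement.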

Unfortunately this is very hard for me to test as I seem to be completely immune to motion sickness :-/

Friday, December 13, 2013

Screen orientated basis vectors

Just a random idea: what if light were cached in texture space using screen orientated basis vectors?
Basically just like the Half-Life 2 "Basis for Radiosity Normal Mapping" vectors, only all vectors are orientated towards the screen to have more accuracy there.

The texture space cache would have to be updated eventually, when the camera moves too much relative to the cached pixels, but since we're not storing a single direction we can interpolate (and slightly extrapolate) between the vectors to increase the time the specular reflections remain valid as the camera moves. The further away pixels are, the longer they can remain cached. For stereo 3D rendering both cameras can use the same data.
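
A minimal sketch of the basis construction, assuming the cache lives in world space (it could just as well be tangent space): take the three Half-Life 2 radiosity-normal-mapping directions, but arrange them around the direction toward the camera instead of around the surface normal. Names are illustrative only.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 Normalize(Vec3 v) {
    float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / l, v.y / l, v.z / l};
}
// Express v (given in coordinates of the frame (t, b, n)) in world space.
static Vec3 InFrame(Vec3 t, Vec3 b, Vec3 n, Vec3 v) {
    return {t.x * v.x + b.x * v.y + n.x * v.z,
            t.y * v.x + b.y * v.y + n.y * v.z,
            t.z * v.x + b.z * v.y + n.z * v.z};
}

// toCamera = normalized direction from the cached texel toward the camera.
void ScreenOrientedBasis(Vec3 toCamera, Vec3 outBasis[3]) {
    // Build an orthonormal frame whose "z" axis points at the camera.
    Vec3 up = std::fabs(toCamera.y) < 0.99f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 tangent   = Normalize(Cross(up, toCamera));
    Vec3 bitangent = Cross(toCamera, tangent);

    // The usual HL2 basis directions, now tilted toward the camera.
    const float s6 = 1.0f / std::sqrt(6.0f);
    const float s2 = 1.0f / std::sqrt(2.0f);
    const float s3 = 1.0f / std::sqrt(3.0f);
    outBasis[0] = InFrame(tangent, bitangent, toCamera, {-s6,  s2, s3});
    outBasis[1] = InFrame(tangent, bitangent, toCamera, {-s6, -s2, s3});
    outBasis[2] = InFrame(tangent, bitangent, toCamera, {std::sqrt(2.0f / 3.0f), 0, s3});
}
```

When the camera has drifted too far from the basis a texel was lit with, that texel would simply be re-lit with a freshly built basis.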

Tuesday, December 3, 2013

Oculus Rift & virtual texture space lighting

I was reading a thread on the Oculus Rift forums where some people were experimenting with super-sampling when rendering the Oculus Tuscany scene in Unity, which really improved the graphics quality. So some people were rendering the scene at 3-4x 1080p ... ouch

So this made me think of that old idea of mine where you would render the lighting of a scene into the page buffer texture of a virtual texture setup ..
For the Oculus Rift, VR and stereo 3D in general, this would have the following benefits:

  • Lighting is (partially) decoupled from the camera (don't need to do all the lighting twice)
  • Lighting can be (partially) cached; diffuse lighting stays valid as long as nothing moves between the surface and the light
  • Lighting can be updated at a different rate than the frame rate if / when necessary..
  • Diffuse and specular lighting could, perhaps, be updated at different rates? (would mess up physically based materials though, unless lighting normalization is done during the final pass)
  • Specular could have a different update frequency depending on distance
  • Specular could be calculated for both cameras at once; materials would only have to be read once; specular could perhaps be compressed as 1 color + 2 intensities?
  • Lighting resolution can be decoupled from the actual resolution of the final rendering of the scene (which is fine because we generally want smooth shadows anyway)
  • Specular will obviously look better in texture space because of the regular normal map aliasing issues.
  • The final rendering pass, which is done twice, would be relatively simple and cheap.

Note that you don't need unique texturing for this to work, just unique texture addressing.
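
Here's a rough sketch of just how cheap that twice-rendered final pass could be: per pixel it boils down to virtual texture address translation plus a fetch of the already-lit texel. The page table layout and names below are purely illustrative (and assume UVs in [0,1)), not how any particular virtual texturing system actually does it.

```cpp
#include <vector>

struct Vec2  { float u, v; };
struct Color { float r, g, b; };

// Toy stand-in for a virtual texture page cache that already contains lit
// texels (the lighting was computed in texture space when each page was updated).
struct LitPageCache {
    int                tableSize;   // page table resolution (pages per side)
    int                cacheSize;   // physical cache resolution in texels
    float              pageUVSize;  // size of one page in physical UV units
    std::vector<Vec2>  pageTable;   // virtual page -> physical page origin UV
    std::vector<Color> texels;      // resident, already-lit texels

    Vec2 TranslateToPhysical(Vec2 vuv) const {
        int px = static_cast<int>(vuv.u * tableSize);
        int py = static_cast<int>(vuv.v * tableSize);
        Vec2 origin = pageTable[py * tableSize + px];
        return { origin.u + (vuv.u * tableSize - px) * pageUVSize,
                 origin.v + (vuv.v * tableSize - py) * pageUVSize };
    }

    Color Sample(Vec2 puv) const {  // nearest-neighbor fetch
        int x = static_cast<int>(puv.u * cacheSize);
        int y = static_cast<int>(puv.v * cacheSize);
        return texels[y * cacheSize + x];
    }
};

// Conceptually all the per-eye final pass does per pixel: translate the unique
// virtual UV of the surface and fetch the cached, pre-lit color.
// No lighting math is repeated for the second eye.
Color FinalPassPixel(const LitPageCache& cache, Vec2 virtualUV) {
    return cache.Sample(cache.TranslateToPhysical(virtualUV));
}
```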

Tuesday, September 3, 2013

Variable eroded navigation meshes

Another random idea; what if navigation meshes were combined with distance fields to make them work well with entities of all kinds of varying sizes?
Instead of having an eroded navigation mesh that only works for one entity size, and presumably needing several copies for different sizes .. (unless there are some other tricks I don't know about)
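
As a quick sketch of how I picture it (hypothetical names, and clearance stored per polygon for simplicity where per-edge would probably be better): keep the navmesh un-eroded, store the distance to the nearest un-walkable geometry alongside it, and compare that clearance against the agent's radius at query time.

```cpp
#include <cstdint>
#include <vector>

struct NavPolygon {
    float clearance;                  // distance from this polygon to the nearest obstacle
    std::vector<uint32_t> neighbors;  // adjacent polygon indices
};

struct NavMesh {
    std::vector<NavPolygon> polygons;

    // One (un-eroded) navmesh serves every agent size: a polygon is traversable
    // for an agent if its clearance is at least the agent's radius.
    bool Traversable(uint32_t polyIndex, float agentRadius) const {
        return polygons[polyIndex].clearance >= agentRadius;
    }
};
```

The pathfinder would then simply reject polygons (or edges) whose clearance is smaller than the agent's radius, so one navmesh works for every entity size.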

Wednesday, August 28, 2013

Material masking in G-buffer?

Welp, went on a long vacation to California & Nevada with my family, and when I came back a shitload of interesting papers were released at SIGGRAPH 2013, Carmack has left id Software (sort of) to start working at Oculus, Ballmer is going to leave Microsoft (to the relief of .. everyone), NIN is back (YES!) and .. Ben Affleck is the next Batman.

What??

I'm pretty sure I accidentally stepped through a wormhole there.
I can't leave you guys alone for a second, can I?

Bonus points for those who can tell me whose company's logo was based on this mountain:

Anyway, some random thoughts:

This paper had an interesting idea that was new to me .. take pre-defined materials (such as iron, aluminum, dust) and combine them together with masks. This works really well with physically based shading where your materials might very well be measured real world materials (or at least based on them).

So I couldn't help but wonder .. if materials are built up like this .. why not just store the masks together with a material id per mask? The masks are much easier to compress (especially if they don't need to be very precise and can be low resolution) and the final textures could be generated at run-time .. maybe even in the pixel shader if the base materials are not texture based (but quality might suffer too much).
Obviously this will only work if the number of material layers is within reason.
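
As a sketch of what that run-time combination could look like (the parameter set and the four-layer limit are arbitrary, just to make the idea concrete):

```cpp
#include <array>
#include <cstdint>

struct BaseMaterial {   // e.g. measured iron, aluminum, dust ...
    float albedo[3];
    float roughness;
    float metalness;
};

struct MaskedTexel {
    std::array<uint8_t, 4> materialId;  // which base material each mask layer picks
    std::array<float,  4>  weight;      // mask values, assumed to sum to ~1
};

// Blend the pre-defined base materials according to the (low resolution) masks
// to produce the final material values for this texel.
BaseMaterial Resolve(const MaskedTexel& texel, const BaseMaterial* library) {
    BaseMaterial out{{0, 0, 0}, 0, 0};
    for (int i = 0; i < 4; ++i) {
        const BaseMaterial& m = library[texel.materialId[i]];
        float w = texel.weight[i];
        out.albedo[0] += m.albedo[0] * w;
        out.albedo[1] += m.albedo[1] * w;
        out.albedo[2] += m.albedo[2] * w;
        out.roughness += m.roughness * w;
        out.metalness += m.metalness * w;
    }
    return out;
}
```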

The base materials could have pre-generated mipmaps, avoiding the issue of needing to generate mipmaps for your final texture after combining the masks. Which is nice, since you don't want to do the whole "turn normals into variance into roughness for a lower res mip" and "fix up your transparency by estimating whether your texture is still roughly as transparent as your higher res mip" dance for your mipmap chain at loading time.

So what if you could store the masks in your G-buffer instead of low quality specular / diffuse / normal / roughness? It would still make it hard to add fine detail to your materials though, especially in the normal map. The surface normal would still need to be stored in the G-buffer, so perhaps normals could be stored directly and all other material properties would be done through masks? Roughness could probably still be stored per material .. Material ids would still need to be stored as well (one for each mask), so it might end up not being a win storage-wise.
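
For what it's worth, a back-of-the-envelope layout for such a G-buffer texel might look like this (the bit budget is purely hypothetical):

```cpp
#include <cstdint>

// 12 bytes per texel: the normal is stored directly, everything else is
// deferred to a mask + material-id lookup in the lighting pass.
struct MaskedGBufferTexel {
    uint32_t packedNormal;    // e.g. two 16-bit octahedral-encoded components
    uint8_t  materialId[4];   // up to four material layers at this pixel
    uint8_t  maskWeight[4];   // 8-bit blend weight per layer
};
```

A conventional layout storing albedo, roughness and the rest explicitly lands in roughly the same ballpark, which is exactly why the ids might eat the savings; the win, if any, comes from the masks compressing better and fine detail living in the base materials rather than per pixel.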

Of course, if all the lighting were rendered in (virtual) texture space then you wouldn't need a G-buffer, and you could just combine your masks into a (virtual) texture cache when you need it.