Saturday, August 6, 2011

Carmack QuakeCon 2011 keynote


So I saw the QuakeCon 2011 keynote by John Carmack...


... and I figured I'd jot down some notes:


Rage uses an 80-degree field of view, but pages might be loaded for the entire 360-degree view?
Loading full pages for surfaces that occupied only 1 pixel on screen hurt a lot in Rage.
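
That second point is easy to picture with a sketch. In a feedback-buffer scheme like the one Carmack describes, a low-resolution pass records which virtual texture page each pixel wants, and analysis threads then count pixels per page; a page backed by a single pixel still costs a full page load. Here's a minimal, hypothetical reconstruction (none of these names are id's):

```cpp
#include <cstdio>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical feedback-buffer entry: each pixel of a low-res feedback
// pass records which virtual texture page (and mip level) it needs.
struct PageId {
    uint16_t x, y;  // page coordinates within the virtual texture
    uint8_t  mip;   // requested mip level
    bool operator==(const PageId& o) const {
        return x == o.x && y == o.y && mip == o.mip;
    }
};

struct PageIdHash {
    size_t operator()(const PageId& p) const {
        return (size_t(p.mip) << 32) ^ (size_t(p.y) << 16) ^ size_t(p.x);
    }
};

// Count how many screen pixels reference each requested page. A page
// that shows up only once still costs a full page load, so a streamer
// could deprioritize requests below some coverage threshold.
std::unordered_map<PageId, uint32_t, PageIdHash>
analyzeFeedback(const std::vector<PageId>& feedback) {
    std::unordered_map<PageId, uint32_t, PageIdHash> pixelCount;
    for (const PageId& p : feedback)
        ++pixelCount[p];
    return pixelCount;
}

int main() {
    std::vector<PageId> feedback = {{10, 4, 0}, {10, 4, 0}, {99, 7, 3}};
    for (const auto& [page, pixels] : analyzeFeedback(feedback))
        std::printf("page (%u, %u) mip %u: %u pixel(s)\n",
                    unsigned(page.x), unsigned(page.y),
                    unsigned(page.mip), pixels);
}
```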

A single 4k x 4k texture wasn't enough.
They ended up using multiple 4k x 4k textures on the consoles.
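
With 128 x 128-texel pages (the page size usually cited for id Tech 5; an assumption on my part here), one 4k x 4k physical texture only holds 1024 resident pages, which makes it plausible that a single one didn't cut it:

```cpp
#include <cstdio>

int main() {
    const int physicalSize = 4096;  // one physical page texture: 4k x 4k
    const int pageSize     = 128;   // assumed page size in texels
    const int pagesPerAxis = physicalSize / pageSize;       // 32
    const int residentPages = pagesPerAxis * pagesPerAxis;  // 1024
    std::printf("%d x %d = %d resident pages per 4k x 4k texture\n",
                pagesPerAxis, pagesPerAxis, residentPages);
}
```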

There's a second buffer of cache that uses just about all remaining memory on the consoles, shared with audio.
The transcode pipeline comes after this cache, so pages are not cached uncompressed.
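
In other words, pages sit in the in-memory cache in their small on-disk format and only get transcoded to a GPU format (DXT) when they're actually brought into the physical page texture. A minimal sketch of that order of operations, with every name made up by me:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Pages live in their compressed (HD Photo-like) form in the in-memory
// cache; transcoding happens only on upload, so the caches never hold
// uncompressed texels.
using PageKey        = uint64_t;              // packed (x, y, mip)
using CompressedPage = std::vector<uint8_t>;  // on-disk bitstream
using DxtPage        = std::vector<uint8_t>;  // GPU-ready block data

struct PageStreamer {
    std::unordered_map<PageKey, DxtPage>        gpuCache;   // stand-in for the 4k physical texture
    std::unordered_map<PageKey, CompressedPage> memCache;   // "second buffer of cache"
    std::unordered_map<PageKey, CompressedPage> diskCache;  // HD cache (slow tier)

    // Placeholder: a real transcoder would decode the HD Photo-like
    // bitstream and re-encode the texels as DXT blocks.
    static DxtPage transcode(const CompressedPage& bits) { return bits; }

    void requestPage(PageKey key) {
        if (gpuCache.count(key)) return;        // already resident
        auto it = memCache.find(key);
        if (it == memCache.end()) {
            it = diskCache.find(key);           // fall back to the HD cache
            if (it == diskCache.end()) return;  // still streaming from disc
            it = memCache.emplace(key, it->second).first;  // promote to memory
        }
        gpuCache[key] = transcode(it->second);  // transcode *after* the cache
    }
};

int main() {
    PageStreamer s;
    s.diskCache[42] = CompressedPage{1, 2, 3};  // pretend page 42 is on the HD
    s.requestPage(42);  // miss in memory, promote from disk, transcode, upload
    return s.gpuCache.count(42) ? 0 : 1;
}
```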

Blu-ray has worse latency than DVD.
Pages are cached on the hard drive on consoles.

Carmack said a better solution would have been to split the page cache into low-detail and high-detail pages.


1 thread reading from DVD
1 thread reading from HD
"additional threads (plural!) doing analysis of the feedback buffer"
"a bunch of threads doing transcoding"
(A rough sketch of how these stages might fit together follows below.)

HD Photo-derived compression, not DCT-based compression.
About 30% better compression, but it requires twice the processing power.

Rage is running on Intel hardware?!? (but not at 60 Hz)

Rage is blurry close up; newer titles will have more detail up close.

The biggest levels are 128k x 128k; dynamic characters are 64k x 64k.
The wasteland consists of 14 pieces of various sizes, from 32k x 32k to 128k x 128k.

He mentions that "the wasteland doesn't fit into one 256 x 256".
Did he mean 256k x 256k?
Considering he specifically said that number, I'm assuming it's some sort of maximum.
256k / 128 = 2k, while the GPU-side page texture is 4k x 4k.
Are they using more page tables?
Are they swapping page tables (pieces) in and out according to the area/level you're in?
Or maybe it's not a hard limit, but rather a soft limit.
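
To put numbers on that question (still assuming 128 x 128-texel pages):

```cpp
#include <cstdio>

int main() {
    const long long virtualSize = 256LL * 1024;  // 256k texels per side
    const long long pageSize    = 128;           // assumed page size
    const long long tableSide   = virtualSize / pageSize;  // 2048
    // A full page table for 256k x 256k is a 2k x 2k grid of entries,
    // which would fit inside a 4k x 4k GPU-side texture with room to
    // spare, so page-table size alone doesn't obviously explain a hard
    // 256k limit.
    std::printf("page table: %lld x %lld = %lld entries\n",
                tableSide, tableSide, tableSide * tableSide);
}
```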



About 300-500 MB per 'level'.
(My Q4 level conversion is 1.5 GB, DXT compressed.)

The source data is a giant 256,000 x 256,000 texture.
64 gigatexels of source data.
A terabyte of source art.
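
For what it's worth, those figures line up if "256,000" is really 256k = 2^18 = 262,144 texels per side: 262,144 x 262,144 = 2^36 texels, which is exactly 64 x 2^30, i.e. 64 gigatexels. At 4 bytes per texel that's 256 GB for a single RGBA layer, so a handful of source layers (diffuse, normal, specular and the like, my guess) gets you into terabyte territory.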

100 gigabytes of texture data (uncompressed) used in Rage.

A large amount of profiling, removing pieces that aren't visible (roofs etc.).
(This clearly works better in outdoor levels, compared to my experiments with Q4.)


This made me think of... (not mentioned by Carmack)

OnLive/Gaikai

The advantage of making a game purely for OnLive/Gaikai is that the whole game runs on their servers and the images and audio are sent to the player over the network; the player only sends their controller inputs.
Obvious latency issues aside, which can be helped somewhat by putting servers all over the world, this has a lot of advantages for publishers and developers. Piracy and cheating becoming practically impossible is one example; not being constrained by disk space or hardware is another.


A game like Rage could run with full detail, maybe even with dynamic lighting, if it had a good server using an OnLive/Gaikai approach. In fact, gameplay could run on separate servers from the actual rendering, which could be done in a more render-farm sort of way.


The render servers could hold a large chunk of the texture data in memory and keep the rest on SSD.
Oddly enough, multiplayer games and MMOs become more attractive than single-player games here: if all players play in the same world, lighting can be cached into the pages for all players, allowing the render farm to separate rendering lighting from rendering camera views. The same goes for anything else that's shared between the players, like physics.
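
To make that concrete, here's a hypothetical shape for it: a single server-side cache of lit pages, keyed by page ID plus a lighting-state version, so the expensive relighting runs at most once per page no matter how many players' cameras can see it. All names are invented for illustration:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// One shared cache of lit texture pages on the render farm. The key
// includes a lighting-state version so dynamic lighting changes
// invalidate pages naturally; every per-player camera renderer samples
// this shared cache instead of re-lighting the world per view.
struct LitPageKey {
    uint64_t pageId;        // packed (x, y, mip) in the virtual texture
    uint32_t lightVersion;  // bumped whenever lighting in that area changes
    bool operator==(const LitPageKey& o) const {
        return pageId == o.pageId && lightVersion == o.lightVersion;
    }
};
struct LitPageKeyHash {
    size_t operator()(const LitPageKey& k) const {
        return k.pageId ^ (size_t(k.lightVersion) << 48);
    }
};

using LitPage = std::vector<uint8_t>;  // texels with lighting baked in

class SharedLitPageCache {
    std::unordered_map<LitPageKey, LitPage, LitPageKeyHash> cache;
public:
    // Called by any player's view renderer; relight() runs at most once
    // per page per lighting state, however many players look at it.
    const LitPage& get(const LitPageKey& key) {
        auto it = cache.find(key);
        if (it == cache.end())
            it = cache.emplace(key, relight(key)).first;
        return it->second;
    }
private:
    // Placeholder for the expensive lighting pass over one page.
    static LitPage relight(const LitPageKey&) { return LitPage(128 * 128 * 4); }
};
```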


Scary to think about scaling this up to World of Warcraft-like sizes though, from a technical point of view.