Thursday, June 17, 2010

Texture compression II

I've made some more progress on the compression code.
I have over 6000 128x128 pages, created from Quake 4 textures, that I'm using for testing.
Right now the average page size is roughly 2kb.
There's one page of 10kb and two more that are almost 9kb; other than those, none gets above 6kb.
I could get more compression out of this, but not without degrading quality beyond where I would like to go.
Average decompression time is 8ms, but I haven't made any attempt at optimizing.
Combined, the pages compress down to about 1/31st of their original size.

I'm wondering how id Software does their compression, though...
According to their latest presentation (id Tech 5 Challenges: From Texture Virtualization to Massive Parallelization), they compress each page to about 2-6kb, but that covers diffuse, normal and specular together.

But I'm only compressing diffuse here.

I know they're using DCT; they've even published some papers about older versions of their compression routines. So far, though, I haven't been able to get down to those sizes without seriously degrading quality.
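For the curious, this is roughly what I mean by DCT-based compression: a plain JPEG-style 8x8 DCT followed by quantization. This is only a sketch for illustration, not id's actual scheme and not my exact code either; the quantization step q is a made-up knob here, where larger values give smaller pages and more visible artefacts.

```cpp
// Minimal JPEG-style sketch: forward 8x8 DCT of one block, then uniform
// quantization. Not id's pipeline; 'q' is an illustrative quality knob.
#include <cmath>
#include <cstdint>

static const double kPi = 3.14159265358979323846;

// Forward 2D DCT-II of an 8x8 block (naive O(n^4), fine for experiments).
void dct8x8(const double in[8][8], double out[8][8])
{
    for (int u = 0; u < 8; ++u)
    for (int v = 0; v < 8; ++v)
    {
        double sum = 0.0;
        for (int x = 0; x < 8; ++x)
        for (int y = 0; y < 8; ++y)
        {
            sum += in[x][y]
                 * std::cos((2 * x + 1) * u * kPi / 16.0)
                 * std::cos((2 * y + 1) * v * kPi / 16.0);
        }
        const double cu = (u == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
        const double cv = (v == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
        out[u][v] = 0.25 * cu * cv * sum;
    }
}

// Quantize the coefficients; most high-frequency ones become zero, which is
// what lets the entropy coder shrink the page.
void quantize8x8(const double coeffs[8][8], int q, int16_t out[8][8])
{
    for (int u = 0; u < 8; ++u)
    for (int v = 0; v < 8; ++v)
        out[u][v] = static_cast<int16_t>(std::lround(coeffs[u][v] / q));
}
```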

Perhaps, with all the real-time lighting and so on, it's perfectly fine to compress textures all the way down to the point where the artefacts are obvious, simply because they won't be as visible in a lit scene as they are in a flat 2D picture.

Perhaps I'm nitpicking too much when it comes to quality.

I'm also using SSIM for texture quality assessments now.
It's not perfect, but it's better than PSNR.
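For reference, here's a minimal sketch of how SSIM can be computed, assuming two grayscale images of equal size and a flat 8x8 window instead of the Gaussian window from the original paper; the constants are the usual K1 = 0.01, K2 = 0.03 with L = 255.

```cpp
// Mean SSIM over non-overlapping 8x8 windows of two grayscale images.
// A proper implementation would use an 11x11 Gaussian window; this flat
// window keeps the sketch short.
#include <cstdint>

double ssim(const uint8_t* a, const uint8_t* b, int width, int height)
{
    const double C1 = (0.01 * 255) * (0.01 * 255);
    const double C2 = (0.03 * 255) * (0.03 * 255);

    double total = 0.0;
    int windows = 0;

    for (int wy = 0; wy + 8 <= height; wy += 8)
    for (int wx = 0; wx + 8 <= width; wx += 8)
    {
        double meanA = 0, meanB = 0, varA = 0, varB = 0, cov = 0;

        for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
        {
            meanA += a[(wy + y) * width + wx + x];
            meanB += b[(wy + y) * width + wx + x];
        }
        meanA /= 64.0;
        meanB /= 64.0;

        for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
        {
            const double da = a[(wy + y) * width + wx + x] - meanA;
            const double db = b[(wy + y) * width + wx + x] - meanB;
            varA += da * da;
            varB += db * db;
            cov  += da * db;
        }
        varA /= 63.0;  // unbiased estimates over the 64-pixel window
        varB /= 63.0;
        cov  /= 63.0;

        total += ((2 * meanA * meanB + C1) * (2 * cov + C2))
               / ((meanA * meanA + meanB * meanB + C1) * (varA + varB + C2));
        ++windows;
    }
    return windows ? total / windows : 1.0;
}
```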