So the last couple of days I've been playing around with texture compression again, just to see how far I could take it using wavelet compression.
I'm pretty pleased with the results. It's not very fast, but then again I haven't made any attempt to optimize it yet.
I don't think I can get it as fast as DCT, but I can definitely get far more compression for the same quality.
Just to be able to measure my results more accurately I've created a tool which crawls through my page textures, compresses them, measures the difference between the compressed and the original texture, and determines the compressed size.
I should note that the third displayed bitmap in the tool is the exaggerated difference between the two textures.
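The comparison boils down to something like the following (a rough Python/NumPy sketch, not the actual tool; the 8x exaggeration factor for the difference view is an arbitrary illustrative choice):

```python
import numpy as np

def compare_textures(original, compressed):
    """Return the RMSE between two textures plus an exaggerated
    difference image for display.

    Both inputs are uint8 arrays of shape (h, w, channels). The 8x
    scale on the difference view is just to make small errors visible.
    """
    diff = original.astype(np.float32) - compressed.astype(np.float32)
    rmse = float(np.sqrt(np.mean(diff * diff)))
    exaggerated = np.clip(np.abs(diff) * 8.0, 0, 255).astype(np.uint8)
    return rmse, exaggerated
```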
The cool thing about wavelet compression is that it implicitly has its mip-maps encoded within it.
With virtual texturing you should always permanently cache your lowest resolution mip-maps, which means that if you use wavelet compression you already have part of the compressed page in memory.
The idea is to use the mip-maps that are already in memory to decompress the higher resolution mip-maps, which helps decrease the amount of data that needs to be loaded from disk (something I'm not doing yet).
Of course this will only be useful if the decompression can be made fast enough.
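One level of the Haar transform makes the mip-map property concrete: the low-pass band is exactly the 2x2 box-filtered half-resolution image, i.e. the next mip level (assuming a box-filtered mip chain), so a cached mip plus the detail bands loaded from disk is enough to rebuild the higher-resolution page. A minimal single-channel sketch, not the actual codec:

```python
import numpy as np

def haar_level(img):
    """One level of a 2D Haar decomposition.

    Returns the low-pass band (ll) plus three detail bands. The ll
    band is the 2x2 box-filter average of the input, i.e. the next
    mip level.
    """
    a = img[0::2, 0::2].astype(np.float32)
    b = img[0::2, 1::2].astype(np.float32)
    c = img[1::2, 0::2].astype(np.float32)
    d = img[1::2, 1::2].astype(np.float32)
    ll = (a + b + c + d) / 4.0   # average -> next mip level
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Invert haar_level: given a cached mip (ll) and the detail
    bands, rebuild the higher-resolution image exactly."""
    h, w = ll.shape
    out = np.empty((h * 2, w * 2), dtype=np.float32)
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out
```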
The texture page size is 128 x 128; 'diffuse' includes the alpha channel here.
I noticed that the textures that compress worst are simple greyish textures with alpha.
It seems that these textures use pre-multiplied alpha, so each color is multiplied by its alpha.
This causes lots of variability in the color channels, even in the areas which you can't really see, which hurts compression.
The best solution would be to pre-multiply the alpha at load time rather than storing it pre-multiplied on disk. For now I'm setting all the alpha==0 pixels to the average color, and this seems to work fairly well.
After interpolating the average color with the actual pixel color using the alpha as the interpolation factor, the compression ratio for the worst cases halved!
Obviously this shouldn't be done in production code, but it does show that non-pre-multiplied alpha textures compress better than pre-multiplied ones, which means that pre-multiplying the alpha should be done after decompression.
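The fix described above amounts to a lerp between the average opaque color and the actual texel color, with alpha as the factor. A rough sketch of that pre-process (assuming 8-bit RGBA NumPy arrays; again, illustrative only):

```python
import numpy as np

def fill_transparent(rgba):
    """Blend each texel's color toward the average opaque color,
    using alpha as the blend factor, so fully transparent regions
    stop injecting noise into the color channels.

    Alpha itself is left untouched; the pre-multiply would then
    happen after decompression instead.
    """
    rgb = rgba[..., :3].astype(np.float32)
    alpha = rgba[..., 3:4].astype(np.float32) / 255.0
    opaque = alpha[..., 0] > 0
    avg = rgb[opaque].mean(axis=0) if opaque.any() else np.zeros(3)
    filled = rgb * alpha + avg * (1.0 - alpha)  # lerp(avg, color, alpha)
    out = rgba.copy()
    out[..., :3] = np.clip(filled + 0.5, 0, 255).astype(np.uint8)
    return out
```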
Also, I can't help but notice that the difference images look a bit like the output of an edge detection filter, which might mean that sharpening the image with a filter could increase the overall quality (maybe).
I could definitely use some better down/up scaling code for the Co/Cg channels (I'm using YCoCg internally as the color representation).