The first link, "Wavelet image compression - Iñigo Quilez", is a nice article about wavelet compression, and it makes me think all the mip maps of a texture page could be compressed together.
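To see why, here is a quick sketch (mine, not from the article) of one 2D Haar decomposition level: the low-pass quadrant it produces is just a 2x2 box filter of the input, i.e. the next mip level, so a wavelet pyramid of a page implicitly carries its own mip chain.

```cpp
#include <vector>
#include <cstddef>

// One 2D Haar analysis step on a size x size grayscale tile (size must be even).
// The result has the low-pass (LL) band in the top-left quadrant and the three
// detail bands (HL, LH, HH) in the other quadrants. The LL band is simply a
// 2x2 box filter of the input, i.e. the next mip level of the tile.
// (Normalizing every band by 0.25 is just one possible choice.)
std::vector<float> haarDecomposeLevel(const std::vector<float>& src, std::size_t size)
{
    std::vector<float> dst(size * size);
    const std::size_t half = size / 2;
    for (std::size_t y = 0; y < half; ++y)
    {
        for (std::size_t x = 0; x < half; ++x)
        {
            const float a = src[(2 * y)     * size + 2 * x];
            const float b = src[(2 * y)     * size + 2 * x + 1];
            const float c = src[(2 * y + 1) * size + 2 * x];
            const float d = src[(2 * y + 1) * size + 2 * x + 1];

            dst[y * size + x]                   = 0.25f * (a + b + c + d); // LL: next mip
            dst[y * size + (x + half)]          = 0.25f * (a - b + c - d); // HL: horizontal detail
            dst[(y + half) * size + x]          = 0.25f * (a + b - c - d); // LH: vertical detail
            dst[(y + half) * size + (x + half)] = 0.25f * (a - b - c + d); // HH: diagonal detail
        }
    }
    return dst;
}
```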
This would mean that lower resolution mips would need to be built up dynamically from several pages, which is not necessarily a bad thing because a lower resolution mip's texture area might not be fully utilized anyway.
However, you don't want to require too many compressed pages to be in memory at the same time when viewing a very low resolution page, so there needs to be some sort of trade-off.
The very lowest mips should be pre-cached in memory anyway, so that you always have something to display when you can't load in your pages fast enough.
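Assembling a lower resolution page on demand could look roughly like this; the Page type, PAGE_SIZE and the child layout are all made up for the sketch, and a real page cache would of course work on compressed, multi-channel data.

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t PAGE_SIZE = 128;      // hypothetical page resolution

struct Page
{
    float texels[PAGE_SIZE][PAGE_SIZE];     // single channel for brevity
};

// Build one page of mip level N+1 by box-filtering the four pages of mip level N
// that cover the same texture area. children[0..3] are laid out as
// { top-left, top-right, bottom-left, bottom-right }.
Page downsampleChildren(const std::array<const Page*, 4>& children)
{
    Page out{};
    const std::size_t half = PAGE_SIZE / 2;
    for (std::size_t cy = 0; cy < 2; ++cy)
    {
        for (std::size_t cx = 0; cx < 2; ++cx)
        {
            const Page& child = *children[cy * 2 + cx];
            for (std::size_t y = 0; y < half; ++y)
            {
                for (std::size_t x = 0; x < half; ++x)
                {
                    const float sum = child.texels[2 * y][2 * x]
                                    + child.texels[2 * y][2 * x + 1]
                                    + child.texels[2 * y + 1][2 * x]
                                    + child.texels[2 * y + 1][2 * x + 1];
                    out.texels[cy * half + y][cx * half + x] = 0.25f * sum;
                }
            }
        }
    }
    return out;
}
```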
The other link "Bump Mapping Unparametrized Surfaces on the GPU - Morten S. Mikkelsen" is very interesting.
Bryan McNett actually sums it up:
The guy who wrote this paper sits five feet away from me. The paper doesn't say so explicitly, but this finally makes normal maps obsolete for games. Implement the paper, and you can replace 2-channel normal maps with 1-channel height maps. You can also throw away all per-vertex tangent/binormal data. Cool stuff, if you ask me!
The primary flaw of the technique is that, since the derivative of the height map is taken with linear-filtering texture sampling hardware, when magnified it looks like "nearest neighbor" filtering. Fixing this requires adding bicubic-filtering to the hardware.
This can actually be used as a texture compression of sorts, by rendering the generated normal map into the normal page cache texture. The linear filtering won't be an issue there.
Of course this requires per-pixel position and per-pixel normal, which I would need anyway to do lighting into the page cache.
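Here is roughly what that evaluation looks like, written as plain C++ instead of a pixel shader and loosely following the perturbed-normal listing from Mikkelsen's paper; in a shader the position and height derivatives would come from ddx/ddy, here they are just passed in, and the little Vec3 type only exists to keep the sketch self-contained.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  operator*(float s, const Vec3& v) { return { s * v.x, s * v.y, s * v.z }; }
static Vec3  operator+(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  normalize(const Vec3& v)
{
    const float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Perturb the interpolated surface normal with a height map, without any
// tangent/binormal data, following the construction in Mikkelsen's paper.
//   n           - interpolated surface normal (unnormalized is fine)
//   dPdx, dPdy  - screen-space derivatives of the surface position (ddx/ddy in a shader)
//   dHdx, dHdy  - screen-space derivatives of the sampled height (times a bump scale)
Vec3 perturbNormal(Vec3 n, Vec3 dPdx, Vec3 dPdy, float dHdx, float dHdy)
{
    n = normalize(n);
    const Vec3  r1  = cross(dPdy, n);
    const Vec3  r2  = cross(n, dPdx);
    const float det = dot(dPdx, r1);
    const float sgn = det < 0.0f ? -1.0f : 1.0f;
    const Vec3  surfGrad = sgn * (dHdx * r1 + dHdy * r2);   // surface gradient of the height field
    return normalize(std::fabs(det) * n - surfGrad);
}
```

Baking would then just be a loop over the texels of a page that computes dHdx/dHdy from neighboring height samples and writes the resulting normal into the normal page cache.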