Friday, August 14, 2009

Virtual Texturing part 1.

Yay! I finally managed to free up a little time to work on virtual texturing!

Thinking it would let me avoid worrying about additional borders, I used a texture array instead of one large texture for my physical page-cache texture.

(Edit: My test textures just happen to be a 'best case', and with alternating border colors between pages an aliased edge is actually visible, so additional page borders are still necessary.)

Each layer in the texture array (255 max) is a single 128x128 page, which gives me a cache of 255 x 128 x 128 pixels.
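To make that concrete, here's a minimal sketch in Python (standing in for the shader code; all names and sizes are illustrative, not my actual implementation) of the indirection this setup implies: a virtual UV resolves to a page, the page-lookup table maps that page to a layer in the array, and the fractional position within the page becomes the UV inside that layer.

```python
# Illustrative sketch of a texture-array page-cache lookup.
# PAGE_SIZE and PAGES_PER_SIDE are made-up example values.

PAGE_SIZE = 128          # pixels per page side
PAGES_PER_SIDE = 8       # this sketch assumes a virtual texture of 8x8 pages

def virtual_to_physical(u, v, page_table):
    """Map a virtual UV in [0,1) to (array_layer, in_page_u, in_page_v)."""
    # Which page does this UV fall in?
    px = int(u * PAGES_PER_SIDE)
    py = int(v * PAGES_PER_SIDE)
    # The page lookup table maps a page coordinate to a cache layer.
    layer = page_table[(px, py)]
    # The fractional position inside the page is the layer-local UV.
    in_page_u = u * PAGES_PER_SIDE - px
    in_page_v = v * PAGES_PER_SIDE - py
    return layer, in_page_u, in_page_v
```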

Right now the 'virtual texture' is small enough that it fits completely in video memory, so it's not exactly 'virtual'; there's no readback yet either.

There is already a page lookup table, however.

The next step is to read back which pages are visible, upload them, and update the page lookup table.
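As a rough sketch of what that feedback loop could look like (illustrative Python with made-up names like `PageCache`; nothing here is working engine code): each frame we read back the set of pages that were touched, upload any that aren't resident yet, evict the least recently used layer when the 255-layer cache is full, and repoint the page lookup table.

```python
# Illustrative sketch of the readback/update step of a virtual texture.
from collections import OrderedDict

MAX_LAYERS = 255  # texture-array layer limit mentioned above

class PageCache:
    def __init__(self):
        self.resident = OrderedDict()  # page id -> cache layer, in LRU order
        self.free_layers = list(range(MAX_LAYERS))

    def update(self, visible_pages):
        """Process one frame's readback; returns (uploads, page_table)."""
        uploads = []
        for page in visible_pages:
            if page in self.resident:
                self.resident.move_to_end(page)   # mark as recently used
                continue
            if self.free_layers:
                layer = self.free_layers.pop()
            else:
                # Cache full: evict the least recently used page's layer.
                _, layer = self.resident.popitem(last=False)
            self.resident[page] = layer
            uploads.append((page, layer))         # page data to upload
        return uploads, dict(self.resident)
```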

Here's a short video showing the blending between the pages.

Yes, yes, no fancy graphics or even interesting geometry; I'm on a tight time schedule here, people! ;)

While working on this I realized that the Rage screenshot in my last post has something odd in it...

Before, I only noticed that all the pages were nice and square and that they had a nice locality to them.

What I failed to notice, and what I notice now, is that it's just plain weird that there's no blending or any transition between what I assumed were mip-maps!?

Maybe they're showing pages at the highest resolution? And the ones in the back are bigger because some pages just happen to have a larger geometric area assigned to them? And it just happens, by chance, to look like some sort of weird, rough mipmapping?

8 comments:

  1. They explain the lack of mipmap blending in this presentation:
    http://s09.idav.ucdavis.edu/talks/05-JP_id_Tech_5_Challenges.pdf
    around slide 16/17

    Their virtual texture implementation does only bilinear blending. Instead of doing trilinear, they continuously blend between pages at different resolutions as they are requested. It's sort of like pre-computing trilinear blending asynchronously, as opposed to per-pixel at render time. I think the benefits are fewer border pixels for every page and fewer fetches from the VT. It only works since every texel on screen should be unique. I wonder if that means every instance of a dynamic object has a unique VT space allocated.

    ReplyDelete
  2. May I ask how you are doing trilinear filtering? AFAIK there are two ways: either the one suggested by Barrett (the tile cache has 2 mipmap levels and you let the hardware perform the filtering), or you do it manually in the shader, by reading two pixels from the indirection table (the current mipmap level and the next one), reading two bilinear samples from the tile cache, and performing the lerp in the shader. Unfortunately, in our case (we're using VTs for terrain rendering) we can't afford to generate data for the second tile-cache mipmap level (Barrett's approach), so we are using the second method, but only for the diffuse texture. The normalmap and glossmap tile caches are bilinear filtered. I said "generating data" because currently we aren't streaming the VT from disk. We are generating the data in another thread using software rasterization (it's a bit easier than it sounds if you consider it's for a heightmap).

    As for the tile borders: are you sure you can avoid them completely? Don't you need at least a one-pixel border to avoid artifacts between tiles? I was thinking about the same thing (a 2D texture array without borders) because it can save the rasterizer some time, but I haven't concluded whether it would be better or worse visually. I guess I'll have to test it and see.

    ReplyDelete
  3. @About: even with no blending between mip-maps, you'd expect a hard edge between those page squares, and you wouldn't expect it to align exactly with -every- page border on screen.

    @HellRaiZer: For trilinear filtering I calculate (an approximation of) the current mipmap level (which is a floating-point value), use ceil()/floor() on it to get two samples, and then blend between them using the fract() of the mipmap level. It takes *a lot* more shader code to do trilinear compared to bilinear. I've only spent maybe a day on all this, so I'm still experimenting somewhat.
    As for the tile borders, I thought about it some more, went back and did a little experiment, and I'm afraid tile borders are still necessary with texture arrays.

    My mistake was that I used blue borders around every tile on all sides, which is a special case that always comes out correct. If I don't put a border around every edge, and use different alternating colors instead, then an aliased edge appears between the tiles.
    Which, when I think about it, makes sense, considering I'm clamping to the edge of the texture layer and it can't possibly blend beyond it.

    This makes me think that using texture arrays is probably a worse way of doing things, because it also makes things like rendering into the texture cache (which I was planning to do) more complicated.

    ReplyDelete
  4. Just a random thought: you know those texture seams I'm getting between texture pages? Since they look exactly like 'regular' aliased edges, maybe simply using MSAA would get rid of them...
    Kind of a hack, but if it looks good enough, it would remove the requirement of borders on texture pages.

    ReplyDelete
  5. Thanks for the info. As I mentioned above, I'm doing the same thing for trilinear filtering.

    May I ask what you mean by "rendering to the texture-cache"? Are you going to use the GPU to generate tile data at run-time? Forgive me if I'm wrong, but unless you are going to generate the data procedurally (e.g. no textures, just math), I don't think it's worth using VTs. E.g. if you are going to load all the tileable textures and decals used by the terrain in order to generate the tiles for the VT, you have already consumed too much memory to make them worth it. I was under the impression that one of the advantages of VTs was the constant memory requirement, independent of how much detail you have in the texture.

    Either way, keep experimenting. I'm always interested in reading more on the subject.

    ReplyDelete
  6. No, I was referring to an idea of mine (which I blogged about before) to use the tile/page cache (in video memory) as a sort of deferred-rendering-style 'g-buffer'.

    It would basically be doing deferred rendering directly into a virtual texture..

    I don't have any particular high hopes that it'll actually work well, but it would have some interesting properties, and it's a fun thing to try.

    ReplyDelete
  7. Maybe MSAA would help in the case of bilinear filtering (or trilinear in the shader; it's still bilinear for the tile-cache texture). But with anisotropic filtering the artifacts might get more visible, and the required level of MSAA might be too high to hide them. To tell the truth, I haven't tested it, so if you do, I'd be glad to hear the results :)

    ReplyDelete
  8. Just a quick update. I haven't had much time lately to work on this, but I will soon.
    As for the MSAA idea, it doesn't work. I always thought of MSAA as supersampling, but of course it only supersamples along the edges of triangles...

    ReplyDelete
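For reference, the manual trilinear blend described in the comments boils down to something like this (an illustrative Python sketch of the shader logic, not actual shader code; `sample_bilinear` is a hypothetical per-level lookup): compute a fractional mipmap level, take the two bracketing integer levels with floor()/ceil(), sample each bilinearly, and blend with the fractional part.

```python
# Illustrative sketch of manual trilinear filtering between mip levels.
import math

def trilinear(sample_bilinear, u, v, mip_level):
    """Blend two bilinear samples around a fractional mip level."""
    lo = math.floor(mip_level)         # lower mip level
    hi = math.ceil(mip_level)          # upper mip level
    t = mip_level - lo                 # fract() of the mip level
    a = sample_bilinear(u, v, lo)      # bilinear sample at the lower mip
    b = sample_bilinear(u, v, hi)      # bilinear sample at the upper mip
    return a * (1.0 - t) + b * t       # lerp between the two levels
```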

To you spammers out there:
Spam will be deleted before it shows up, so don't bother.