Just a random thought.
I had thought about a deferred rendering / virtual texturing approach before where, instead of storing the normal, albedo, etc. in a g-buffer, you'd store only the texture coordinates + texture derivatives (+ z position and surface normal, unless you're storing normals in world space, which is possible for static environments with virtual texturing). I quickly dropped that idea back then because you'd be storing roughly the same amount of data, so it wouldn't gain you much.

But this morning I was thinking: if you have a lot of overdraw, and/or you render lots of small triangles (especially ones smaller than 2x2 pixels, where the GPU's quad-based shading wastes a lot of work), it might pay to run a really simple shader in the g-buffer pass and defer the actual texture lookups to a later pass that renders just a single full-screen triangle.

It might work, it might make no difference, it might be worse. I should definitely try it some day.
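To put a rough number on the "roughly the same amount of data" part, here's a minimal sketch of what such a per-pixel g-buffer entry could look like. This is plain C++ just to count bytes; the GBufferTexel name and the full 32-bit float fields are my assumptions for illustration, a real layout would pack things much tighter (half floats, octahedral normals, etc.):

```cpp
// Hypothetical per-pixel payload for a "store UVs instead of shaded attributes"
// g-buffer: texture coordinates plus their screen-space derivatives (needed to
// pick the right mip level once we're outside the original triangle's shader),
// depth, and a surface normal. The shading pass would read this back in a single
// full-screen triangle and do the actual texture fetches there.
#include <cstdio>

struct GBufferTexel {
    float u, v;        // texture coordinates into the (virtual) texture
    float dudx, dvdx;  // screen-space derivatives of the texture coordinates,
    float dudy, dvdy;  //   for mip / anisotropy selection in the later pass
    float z;           // depth, to reconstruct position for lighting
    float nx, ny, nz;  // surface normal (could be dropped if normals live in
                       //   the virtual texture in world space, static scenes only)
};

int main() {
    // At 4 bytes per field this is 40 bytes per pixel -- in the same ballpark
    // as (or a bit fatter than) a conventional albedo + normal + material
    // g-buffer, which is why the layout alone doesn't save bandwidth. Any win
    // would come from moving the texture fetches out of the overdraw /
    // tiny-triangle path, not from storing less.
    std::printf("bytes per pixel: %zu\n", sizeof(GBufferTexel));
    return 0;
}
```

So the storage side is basically a wash; the whole bet is on making the geometry pass cheaper.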