Technically, at least on Vulkan, these things can be inferred or intercepted with just an injected layer, though it's not trivial. If you keep a history of depth buffers, you can compute a fairly accurate approximation of the visible surfaces from the view's point of view. But that isn't the same as the real polygons and meshes that the textures map onto… pretty sure you can't run that kind of pipeline in real time, even with tiled temporal supersampling. Most likely it works on the final output directly, plus perhaps a few same-frame buffers like motion vectors and depth, which they've needed since DLSS 2 anyway. Claiming access to full polygons is pretty suspect unless there's tight integration with the game itself, and even then frame budgets are crazy tight as it is, never mind running extra passes at that level.
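The "approximate surfaces from depth" idea boils down to inverting the perspective projection per pixel. A minimal sketch, assuming a Vulkan-style projection (depth buffer values in [0,1], camera looking down -Z); the function name and all parameter values here are hypothetical:

```python
import math

def unproject_depth(px, py, depth, width, height, fov_y, near, far):
    """Recover a view-space position from a depth-buffer sample.

    Assumes a standard Vulkan-style perspective projection with
    depth in [0, 1] and the camera looking down -Z. A sketch, not
    a drop-in for any particular engine's conventions.
    """
    aspect = width / height
    f = 1.0 / math.tan(fov_y / 2.0)  # cot(fov/2), the projection scale
    # Invert the depth mapping: depth 0 -> z = -near, depth 1 -> z = -far
    z_view = -near * far / (far - depth * (far - near))
    # Pixel center to NDC in [-1, 1] (ignoring Vulkan's flipped Y here)
    x_ndc = (px + 0.5) / width * 2.0 - 1.0
    y_ndc = (py + 0.5) / height * 2.0 - 1.0
    # Undo the perspective divide (clip-space w is -z_view)
    x_view = x_ndc * -z_view * aspect / f
    y_view = y_ndc * -z_view / f
    return (x_view, y_view, z_view)

# A sample with depth 0 lies on the near plane, so z_view comes out as -near
p = unproject_depth(320, 240, 0.0, 640, 480, math.radians(60), 0.1, 100.0)
```

Doing this for every pixel every frame gives you a point cloud of what's visible, which is exactly why it's plausible as an injected-layer trick but still nothing like the game's actual mesh data.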
SkunkWorkz@lemmy.world 2 hours ago
Probably not meshes, since that would be way too expensive. But these guys write the GPU drivers, so of course they have access to the various frame buffers, texture buffers, and light-source data. Just from depth and normal-map data you can get a good representation of the geometry. Deferred rendering, for example, lights the scene from the 2D images in the G-buffer, not from the geometry itself.
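Even without a normal buffer, per-pixel normals can be estimated from the depth buffer alone by unprojecting neighboring samples and taking a cross product. A self-contained sketch under assumed projection parameters (all values and names here are illustrative, not from any real driver):

```python
import math

def view_pos(px, py, depth, w=640, h=480, fov_y=math.radians(60),
             near=0.1, far=100.0):
    """Depth-buffer sample -> view-space position, assuming a
    Vulkan-style perspective projection (depth in [0, 1])."""
    f = 1.0 / math.tan(fov_y / 2.0)
    z = -near * far / (far - depth * (far - near))
    x_ndc = (px + 0.5) / w * 2.0 - 1.0
    y_ndc = (py + 0.5) / h * 2.0 - 1.0
    return (x_ndc * -z * (w / h) / f, y_ndc * -z / f, z)

def estimate_normal(depth_at, px, py):
    """Estimate a view-space surface normal from three neighboring
    depth samples via a cross product of finite differences.
    Real code would also guard against depth discontinuities at
    object silhouettes, which this sketch ignores."""
    p0 = view_pos(px, py, depth_at(px, py))
    p1 = view_pos(px + 1, py, depth_at(px + 1, py))
    p2 = view_pos(px, py + 1, depth_at(px, py + 1))
    ex = [b - a for a, b in zip(p0, p1)]
    ey = [b - a for a, b in zip(p0, p2)]
    n = [ex[1] * ey[2] - ex[2] * ey[1],
         ex[2] * ey[0] - ex[0] * ey[2],
         ex[0] * ey[1] - ex[1] * ey[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

# A flat wall at constant depth should face straight at the camera,
# i.e. the normal points along +Z in view space
flat = estimate_normal(lambda x, y: 0.5, 320, 240)
```

This kind of screen-space reconstruction is the same family of trick that SSAO and screen-space reflections already rely on, so it is entirely believable from a driver vendor; polygon-level access is the part that would need game-side cooperation.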