Comment on Nvidia Announces DLSS 5, and it adds... An AI slop filter over your game
popcar2@piefed.ca 8 hours ago
One is upscaling the image while preserving it as much as possible; the other is applying a filter to try and “enhance” it. What’s hard to get?
Ledivin@lemmy.world 8 hours ago
How is “upscaling while preserving it” not literally the exact same philosophy as “enhancing by applying a filter”? You just don’t like the specific filter; it’s very literally the same process.
Nibodhika@lemmy.world 7 hours ago
Because a pixelated circle, once upscaled, is still a circle, but a pixelated circle turned into a high-definition pie is no longer a circle. That’s especially problematic if the circle was actually a crosshair or some other circle-like thing the AI decided was meant to be a pie.
Yes, both things are the same, but that’s like saying that because you were okay with a tiny spider in your house killing mosquitoes, you should be okay with a colony of bats, since they’re also animals that eat mosquitoes. The scale and the amount of intrusion are completely different.
grue@lemmy.world 4 hours ago
If your training data has a pixelated circle as an input and a circle as output, your neural network will “upscale” your pixelated circle to a circle. If your training data has a pixelated circle as input and a high definition pie as output, your neural network will “upscale” your pixelated circle to a high definition pie. It’s the same algorithm in both cases.
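grue’s point can be illustrated with a toy sketch (plain numpy least squares, nothing like DLSS’s actual architecture; all names and the fake patch data here are made up): the exact same fitting procedure learns a faithful upscaler or a “reimagining” one depending only on what targets it was trained against.

```python
# Toy illustration (NOT DLSS): the same training procedure learns whatever
# mapping its training targets contain. Only the targets differ below.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.random((100, 16))  # 100 fake "pixelated" 4x4 patches, flattened

# Target A: a faithful stand-in -- each low-res pixel simply repeated 4x.
target_faithful = inputs.repeat(4, axis=1)
# Target B: a "reimagined" stand-in -- outputs unrelated to the inputs.
target_reimagined = rng.random((100, 64))

def fit_upscaler(X, Y):
    # Identical "training" in both cases: least-squares fit of a linear map.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

W_faithful = fit_upscaler(inputs, target_faithful)
W_reimagined = fit_upscaler(inputs, target_reimagined)

# Same algorithm, same inputs -- but the two learned "upscalers" now do
# completely different things to the same patch.
patch_out_a = inputs[0] @ W_faithful
patch_out_b = inputs[0] @ W_reimagined
```

Here the faithful targets are exactly reproducible from the inputs, so the fit recovers that mapping almost perfectly, while the “reimagined” fit invents output the input never contained, which is the whole dispute in this thread.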
heavyboots@lemmy.ml 7 hours ago
Current DLSS intent: We can only render this at like 720p with enough frames, so let’s do that and use AI anti-aliasing tricks so that when we present it at 4k, none of the jaggies are visible on-screen like they would be with raw 720p upscaling.
DLSS5 intent: Using our pile-of-stolen-artwork neural net, which we can now run at 60fps+, let’s “reimagine” the entire look of the game as we present it on screen, even if it was already running at 4k just fine.
zaphod@sopuli.xyz 7 hours ago
Ideally you’d have a DLSS-like system trained specifically for one game instead of a general system. Then you could train it on 4k at the highest settings, and you should get something that doesn’t mess with the style of the game.
slybebop@sh.itjust.works 5 hours ago
You’re describing what DLSS 1.0 was, I believe.
heavyboots@lemmy.ml 7 hours ago
Yep. Maybe it could actually be “modules” that the individual devs submit with their game, essentially.
ricecake@sh.itjust.works 7 hours ago
… How is flying a spaceship different from driving a car? They’re both controlled applications of kinetic energy to move people or objects.
At the end of the day, it’s all a pile of transistors and the only thing that is of import is the intent behind usage.
In one case it’s saying you can use a neural net to take something rendered at resolution A/4 and make it visually indistinguishable from the same render at resolution A.
The other is rendering something and radically changing the artistic or visual style.
Upsampling can be replicated within some margin by lowering the framerate and letting the GPU work longer on each frame. It strives to restore detail that was left out for the sake of rendering quicker, by guessing.
You cannot turn this feature off and get similar results by lowering the frame rate. It aims to add detail that was never present by guessing.
Upsampling methods have been produced that don’t use neural networks. The differences in behavior are in the realm of efficiency, and in many cases you would be hard pressed to tell which is which. The neural network is an implementation detail.
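One such non-neural method is plain bilinear interpolation; a minimal pure-Python sketch (function name and the tiny test image are invented for illustration) shows that the output is fully determined by the input, with no training data and no invented detail:

```python
# Minimal non-neural upscaler: plain bilinear interpolation.
# Every output pixel is a weighted average of its four nearest
# source pixels -- nothing is "imagined" that wasn't in the input.
def bilinear_upscale(img, factor):
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Map the output pixel back into source coordinates.
            sy = min(y / factor, h - 1)
            sx = min(x / factor, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

tiny = [[0.0, 1.0],
        [1.0, 0.0]]
big = bilinear_upscale(tiny, 2)  # 2x2 -> 4x4, deterministic result
```

A neural upscaler trained to be faithful behaves much like this in spirit, which is why the neural network really is an implementation detail there, whereas a generative “reimagining” pass has no non-AI equivalent of this kind.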
In the other case, the changes are more broad than can be captured by non AI techniques easily. The generative capabilities are central to the feature.
Process matters, but zooming out too far makes everything identical, and the intent matters too. “I want to see your art better” as opposed to “I want to make your art better”.