Generative AI is a name for some ways you can use AI, not for its architecture.
There’s space to discuss if DLSS is it or not. But your argument is baseless.
FishFace@piefed.social 1 day ago
I hate videos for information like that. I’d read an article though.
But from your description, DLSS <5 was genAI - transformer models are the backbone of genAI. There’s certainly the possibility that DLSS 5 is a whole other bucket of crabs but idk.
> Generative AI is a name for some ways you can use AI, not for its architecture.
> There’s space to discuss if DLSS is it or not. But your argument is baseless.
The base for it is that it is generating pixels - and entire frames.
The difference between DLSS 5 and <5 seems quantitative, not qualitative.
FiniteBanjo@feddit.online 1 day ago
It’s a very visual topic so using a visual medium to learn about it is ideal.
Again, I feel like it’s disingenuous to compare using nearby pixels to predict local pixels - which ends up about as accurate as simply rendering at a higher resolution - to generating an entirely different image every frame. One of them sounds no different than using certain filters or post-processing; the other sounds like slop-ass AI.
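To make that distinction concrete, here’s a minimal sketch (not how DLSS actually works, just an illustration of the filter-style upscaling being described, where every output pixel is predicted purely from its local neighbours):

```python
# Illustrative only: classic interpolation-based upscaling predicts each
# inserted pixel from the pixels right next to it, like a filter would.
def upscale_row_2x(row):
    """Double a row of pixel values by linear interpolation:
    each new pixel is the average of its two neighbours."""
    out = []
    for i, p in enumerate(row):
        out.append(p)
        if i + 1 < len(row):
            out.append((p + row[i + 1]) / 2)  # local prediction only
    return out

print(upscale_row_2x([0, 100, 50]))  # [0, 50.0, 100, 75.0, 50]
```

Frame generation is a different operation entirely: it synthesizes whole frames that were never rendered, rather than filling in pixels between known neighbours.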
ZombiFrancis@sh.itjust.works 1 day ago
The problem stems from the term ‘GenAI’. These systems use math to predict things, and there are plenty of things out there that are valid to predict mathematically. Rendering lighting is one of them.
Human language and imagery aren’t, which is what idiots have been trying to funnel through these models.
FishFace@piefed.social 1 day ago
The effect looks like a filter or shader. I’ve seen the comparisons.
FiniteBanjo@feddit.online 1 day ago
The fuck are you talking about? DLSS 5 has been adding wrinkles and entire facial features, in one demo it kept accidentally adding wheels to cars driving in the background. It doesn’t look like a filter or shader, it looks like ass slop.
FishFace@piefed.social 1 day ago
I looked at the still comparisons on the nVIDIA article.