There was an interesting post that I was linked to on Reddit, supposedly from an Assassin's Creed dev.
I’ll quote it here:
“I've been watching the fallout of the DLSS 5 video, and wanted to check in with some game devs to see if I've been taking crazy pills, or if I've understood game dev incorrectly.
Games are not visuals; they are game mechanics and game loops skinned in a visual interface. When we make games, we make all the things that interact with our mechanics and loops visually distinct and, more importantly, repeatable.
In Assassin's Creed, every ledge that I can climb looks visually distinct from every other ledge. In most games, outline and color matter far more than how things look up close. They are used to identify what we are looking at, not to look realistic. These things are icons in the world more than they are objects.
Light and shadow are not just for visual pleasure; they are used to draw the eye toward objectives and toward where you should go.
In short, there is information in the visual representation of the game mechanics that tells players what they should do and where they should go.
When I see video games processed through DLSS 5, I see game information stripped away, making games less playable and more confusing. I could understand having this in a photo mode, but why on earth should we have this in any of our games, if we don't know what it will change things into? Or whether it will even remain consistent the next time you look at it?
Will it remove the yellow paint on my Assassin's Creed ledges, or perhaps only up-res the rest of the assets and make the yellow ledges stand out like a sore thumb? Will it remove scars that are story-relevant from an RPG character? Will it smooth out a wall that is supposed to look destructible? There are so many visually important things in games that I know this thing won't respect.
Did no one involved in making this video understand Game Design or Art Design?”
PlzGivHugs@sh.itjust.works 21 hours ago
From my understanding, it may be possible to work around some of this, since the program is meant to hook into the game in a number of different ways. It's very possible that an "importance" mask could be added as an input, for example. This wouldn't fix everything, but it would still give a way to separate game elements from environmental details.
That said, there's been so much focus on how it looks. IMO, it's completely overblown, especially when all of this needs to be manually configured on a game-by-game basis. Devs can tweak the settings to their own preferences and make things more or less extreme.
The part that's much more worthy of mockery is the fact that they're demoing a consumer product on professional-grade hardware, during a hardware shortage. They couldn't even get the demo working on a high-end gaming PC, and they think this tech is worth advertising? That is the funny part of all this.
ech@lemmy.ca 19 hours ago
It’s wild that every defense of this garbage is “Just have devs spend even more time fine-tuning for this.” Yes, let’s double (or more) the workload of workers who are already overworked and crunched beyond reason, all for a “feature” that looks like garbage in its showcase demo and is so resource-intensive that very few users will be able to utilize it, if they even want to.
PlzGivHugs@sh.itjust.works 18 hours ago
It's more an argument against the "artist's intent" and "disrupting gameplay" points. As I said, the feature is dumb, but not because it "looks like AI".
Do you have any evidence for this? Given what's been shown, this seems relatively easy to implement on the game-dev side.
Quetzalcutlass@lemmy.world 18 hours ago
Even if implementing it turns out to be trivial, testing art assets for quality and consistency will be a nightmare. Especially if the underlying generative AI isn’t deterministic.
nightlily@leminal.space 15 hours ago
From everything Nvidia has said, the inputs are simply the final pixel colour values and motion vector information. It's meant to sit in the same post-processing stack as the upscaler; it's effectively a screen-space post-processing filter over the final image. Nvidia have said that the artist controls are masking (blocking certain areas from it), intensity (so a slider value), and some kind of colour re-grading (since it destroys the original grading). It's extremely limited.
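If the artist controls really are just a mask plus an intensity slider, the compositing step would amount to a per-pixel linear blend, something like the sketch below. This is purely my guess at the shape of it; the function name, the 0..1 mask convention, and the linear blend are assumptions, not anything Nvidia has documented.

```python
def composite(original, generated, mask, intensity):
    """Blend the AI-processed frame back over the rendered frame.

    original, generated: per-pixel values of the rendered and
        AI-processed frames (flattened to one channel for simplicity).
    mask: per-pixel weights in 0..1 -- 0 blocks the filter entirely,
        so gameplay-critical pixels (yellow ledges, etc.) pass through.
    intensity: the global artist slider, 0..1.
    """
    out = []
    for o, g, m in zip(original, generated, mask):
        t = m * intensity              # t == 0 keeps the original pixel
        out.append(o * (1.0 - t) + g * t)
    return out

# A masked-out pixel keeps its original value even at full intensity:
# composite([0.2], [0.8], [0.0], 1.0) -> [0.2]
```

Under that model, the "importance mask" idea mentioned above is just authoring the mask buffer per game, which is also why it can't restore information the filter has already hallucinated over elsewhere in the frame.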
cheat700000007@lemmy.world 14 hours ago
And Nvidia are full of shit, judging by how it clearly changes geometry in the demos, women's faces in particular.
PlzGivHugs@sh.itjust.works 13 hours ago
If it is the same as DLSS 4 Super Resolution, it seems to use motion vectors, colour buffers, depth buffers, and camera information like exposure. That said, this might change, as, like I said, they're showing off something they haven't even got running on the target hardware. It's clearly not even close to being a finished product.