The game is rendered at a lower resolution, which saves a lot of resources.
Dedicated AI cores, or even special AI scaler chips, are then used to upscale the image back to the requested resolution.
I get that much. Or at least, I get that’s the intention.
This is a fixed cost and can be done with little power since the components are designed to do this task.
This is the part I struggle to believe/understand. I’m roughly aware of how resource-intensive upscaling is on locally hosted models. The tech/resources needed to do that at 4K+ in real time (120+ fps) seem at least as expensive as, if not more expensive than, just rendering at that resolution in the first place. Are these “scaler chips” really that much more advanced/efficient?
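To put some rough, purely illustrative numbers on where my doubt comes from (picking 1440p as the internal resolution just as an example, not because any specific game uses it): the claimed win is that a lower internal resolution cuts the shading work per frame roughly in half, while the upscale pass is a fixed per-frame cost running on dedicated hardware instead of the shaders.

```python
# Back-of-envelope sketch (hypothetical numbers, not measurements):
# compare the per-second shading load of native 4K at 120 fps with
# rendering at 1440p internally and letting a fixed-cost upscale pass
# (on dedicated AI cores) fill in the remaining pixels.

NATIVE_4K = 3840 * 2160        # pixels shaded per frame at native 4K
INTERNAL_1440P = 2560 * 1440   # pixels shaded per frame at the internal res
FPS = 120

native_load = NATIVE_4K * FPS
internal_load = INTERNAL_1440P * FPS

print(f"Native 4K shading load:      {native_load / 1e9:.2f} Gpixels/s")
print(f"1440p internal shading load: {internal_load / 1e9:.2f} Gpixels/s")
print(f"Shader work saved:           {1 - internal_load / native_load:.0%}")

# My open question is whether the upscale network's fixed per-frame
# cost on those dedicated cores really stays small enough not to eat
# the ~56% of shader work saved above.
```

If that’s roughly how the trade-off is supposed to work, my question boils down to whether that fixed cost is really as small as claimed.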
Further questions aside, I appreciate the explanation. Thanks!
chicken@lemmy.dbzer0.com 2 months ago
lol, trying to hedge against downvotes from the anti-AI crowd?