Comment on OpenAI introduces Sora, its text-to-video AI model
Vex_Detrause@lemmy.ca 8 months ago
Imagine VR giving us an AI-generated world. It would be Ready Player One IRL.
AgentGrimstone@lemmy.world 8 months ago
I recently played a game where people found immortality and each individual just lived in their own personal virtual reality for thousands of years. It’s kinda creepy seeing the recent advances in technology today lining up with that, minus the immortality part.
nossaquesapao@lemmy.eco.br 8 months ago
What game was that?
AgentGrimstone@lemmy.world 8 months ago
It’s a spoiler to reveal the game so…
SPOILER: Sorry, I don’t know how to do spoiler tags on this app, but I’m referring to the antagonists in Horizon Forbidden West. Here’s another sentence just to help hide the game for anyone scrolling by.
Toribor@corndog.social 8 months ago
The compute power it would take to do that in real time, at the framerates required for VR to be comfortable, would be absolutely insane. But at the rate hardware improves, and the breakneck speed these AI models are developing, maybe it’s not as far off as I think.
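For a rough sense of scale, here is a back-of-the-envelope calculation. The per-eye resolution and refresh rate below are my own ballpark figures for a current standalone headset, not numbers from this thread:

```python
# Back-of-the-envelope: how many pixels per second a VR headset must fill.
# Assumed specs (roughly current-gen standalone VR, e.g. ~2064x2208 per eye):
width, height = 2064, 2208   # per-eye render resolution (assumption)
eyes = 2
fps = 90                     # a common minimum for comfortable VR

pixels_per_second = width * height * eyes * fps
print(f"{pixels_per_second / 1e9:.2f} gigapixels/s")  # ~0.82 gigapixels/s
```

Generating nearly a gigapixel of coherent, 3D-consistent imagery every second is a very different workload from Sora producing a short clip offline.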
Blue_Morpho@lemmy.world 8 months ago
An AI-generated VR world would be a single map environment, generated the same way a game builds a level while you wait at a loading screen when it starts or when you move to an entirely new map.
A text-to-3D game-asset AI wouldn’t regenerate the 3D world on every frame, in the same way you wouldn’t ask an image AI to draw a picture of an orange cat and then ask it to draw another picture of the same cat shifted one pixel to the left if you wanted the cat moved a pixel. The result would be a totally different picture.
Toribor@corndog.social 8 months ago
I think we’re talking about different kinds of implementations.
One being an AI-generated ‘video’ that is interactive, generating new frames continuously to simulate a 3D space you can move around in. That seems pretty hard to accomplish for the reasons you’re describing. These models are not particularly stable or consistent between frames, and the software has no understanding of physical rules, just of how a scene might look based on its training data.
Another and probably more plausible approach is likely to come from the same frame-generation technology in use today in things like DLSS and FSR. I’m imagining a sort of post-processing that can draw details on top of traditional 3D geometry. You could classically render a simple scene and let the AI draw on top of the geometry to fake higher levels of detail. This is already possible, but it seems reasonable to imagine these tools getting more creative and turning a simple, blocky, undetailed 3D model into a photorealistic object. Still insanely computationally expensive, but grounding the AI with classic rendering could be really interesting.