Comment on A Google AI Watched 30,000 Hours of Video Games—Now It Makes Its Own
swordsmanluke@programming.dev 8 months ago
So… unlike Stable Diffusion or LLMs, the point of this research isn’t actually to generate a direct analog to the input, in this case video games. It’s testing to see if a generative model can encode the concepts of an interactive environment.
Games in general have long been used in AI research because they are models of some aspect of reality. In this case, the researchers want to see if a generative AI can learn to predict the environment just by watching things happen. You know, like real brains do.
E.g., can we train something that learns the rules of an environment just by watching video combined with “input signals”? If so, it opens up whole new methods for training robots to interact with the real world.
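To make that concrete: the setup is basically next-state prediction from (observation, input) pairs. Here’s a toy sketch of the idea — emphatically *not* Genie’s actual architecture (that’s a big latent video model), just a hypothetical tabular stand-in where “frames” are states in a tiny hidden game and the model learns the dynamics purely by watching:

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy: learn an environment's dynamics purely from observed
# (state, input, next_state) triples, with no access to the game's rules.
# States stand in for video frames here.

# A tiny hidden "game": a 1-D corridor of 5 cells; inputs are -1 (left) / +1 (right).
def hidden_game_step(state, action):
    return max(0, min(4, state + action))  # walls clamp movement

# 1. "Watch" gameplay: record transitions without ever reading hidden_game_step.
random.seed(0)
observed = defaultdict(Counter)  # (state, action) -> counts of observed next states
state = 2
for _ in range(1000):
    action = random.choice([-1, 1])
    nxt = hidden_game_step(state, action)
    observed[(state, action)][nxt] += 1
    state = nxt

# 2. The learned "world model": predict the most frequently observed outcome.
def predict(state, action):
    return observed[(state, action)].most_common(1)[0][0]

# 3. The model can now simulate a game whose rules it never saw,
#    e.g. it has learned that pushing left at the left wall goes nowhere.
print(predict(0, -1))  # stays at 0
print(predict(2, 1))   # moves to 3
```

Scale that up from a 5-cell corridor to pixels and latent actions and you get the flavor of what this research is after.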
That’s why this is newsworthy beyond just the usual “AI buzz” cycle.