Comment on Indie Game Awards Disqualifies Clair Obscur: Expedition 33 Due To Gen AI Usage

AnarchistArtificer@slrpnk.net 5 days ago

I’m not so much talking about machine learning being implemented in the final game, but rather used in the development process.

For example, if I were to attempt a naive implementation of procedurally generated terrain, I imagine I’d use noise functions to create variety (which I wouldn’t consider to be machine learning). However, I’d expect that to end up producing predictable results, so to avoid that, I could try chucking in a bunch of real-world terrain data, and that starts getting into machine learning.
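To be concrete, here’s roughly what I mean by a naive noise-based approach, with no machine learning involved at all. It’s just a toy value-noise heightmap; the function name, scale, and numbers are all made up for illustration, not taken from any real game:

```python
# Minimal sketch of noise-based terrain (no machine learning involved).
# All names and parameters here are hypothetical illustrations.
import numpy as np

def value_noise(width, height, scale=8, seed=0):
    """Generate a heightmap by smoothly upsampling a coarse grid of random values."""
    rng = np.random.default_rng(seed)
    coarse = rng.random((height // scale + 2, width // scale + 2))
    ys = np.linspace(0, coarse.shape[0] - 2, height)
    xs = np.linspace(0, coarse.shape[1] - 2, width)
    y0, x0 = ys.astype(int), xs.astype(int)
    ty, tx = ys - y0, xs - x0
    # Bilinear interpolation between the coarse grid points.
    top = coarse[y0][:, x0] * (1 - tx) + coarse[y0][:, x0 + 1] * tx
    bot = coarse[y0 + 1][:, x0] * (1 - tx) + coarse[y0 + 1][:, x0 + 1] * tx
    return top * (1 - ty)[:, None] + bot * ty[:, None]

heightmap = value_noise(128, 128)
print(heightmap.shape, heightmap.min(), heightmap.max())
```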

A different, less specific example I can imagine a workflow for is reinforcement learning. Say the developer writes code that effectively says “give me terrain that has [a variety of different parameters]”; when the system produces that for them, they go “hmm, not quite. Needs more [thing]”. This iterative process could, of course, be done without any machine learning, if the dev tuned the parameters themselves at each stage, but it seems plausible to me that it could use machine learning (which would involve tuning model hyperparameters rather than parameters).
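As a sketch of what automating that loop could look like, here’s a toy example that uses plain random search over invented generator parameters in place of a real reinforcement learning setup. The generator, parameter names, and scoring function are all made up; the scoring function is just standing in for the designer’s “needs more [thing]” judgement:

```python
# Hypothetical sketch of automating the tuning loop with simple random search.
# A real pipeline might use reinforcement learning or Bayesian optimisation instead;
# the parameter names and scoring function here are invented for illustration.
import random

def generate_terrain(ruggedness, tree_density, water_level):
    """Stand-in for a real terrain generator; returns a fake summary of the result."""
    return {
        "cliff_count": ruggedness * 10,
        "forest_cover": tree_density,
        "lake_area": max(0.0, water_level - 0.3),
    }

def score(terrain):
    """Stand-in for the designer's judgement, e.g. 'needs more cliffs'."""
    return -abs(terrain["cliff_count"] - 5) - abs(terrain["forest_cover"] - 0.4)

best_params, best_score = None, float("-inf")
for _ in range(200):
    params = {
        "ruggedness": random.uniform(0, 1),
        "tree_density": random.uniform(0, 1),
        "water_level": random.uniform(0, 1),
    }
    s = score(generate_terrain(**params))
    if s > best_score:
        best_params, best_score = params, s

print(best_params, best_score)
```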

You make a good point about procedural generation at runtime, and I agree that this seems unlikely to be viable. However, I’d be surprised if it wasn’t used in the development process in at least some cases. I’ll give a couple of hypothetical examples using real games, though I emphasise that I have no grounds to believe either of these games used machine learning during development; this is just hypothetical pondering.

For instance, in Valheim, maps are procedurally generated. In the meadows biome, you can find raspberry bushes. Another feature of the meadows biome is that it occasionally has large clearings that are devoid of trees, and around the edges of these clearings there is usually a higher rate of raspberry bushes. When I played, I wondered why this was the case: was it a deliberate design decision, or just an artifact of how the procedural generation works? Through machine learning, it could in theory be both of these things: the devs could tune the hyperparameters a particular way, and then notice that the output results in raspberry bushes being more likely to occur in clusters on the edge of clearings, which they like. This kind of process wouldn’t require any machine learning to be running at runtime.
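Purely to illustrate the kind of rule that could produce that clustering (I have no idea how Valheim actually does it), here’s a toy spawn rule where bushes become more likely in an invented “edge of clearing” band of tree density; every threshold and probability is made up:

```python
# Purely hypothetical illustration of a placement rule that would make bushes
# cluster at clearing edges. This is NOT Valheim's actual generator.
import random

def spawn_bush(tree_density, base_chance=0.02, edge_bonus=0.10):
    """Boost the spawn chance where tree density falls in the 'edge of clearing' band."""
    near_edge = 0.1 < tree_density < 0.4   # open enough to be a clearing, close enough to trees
    chance = base_chance + (edge_bonus if near_edge else 0.0)
    return random.random() < chance

# Sample spawn decisions across a gradient from deep forest to open clearing.
for density in [0.9, 0.5, 0.3, 0.2, 0.05]:
    spawns = sum(spawn_bush(density) for _ in range(10000))
    print(f"tree density {density:.2f}: {spawns} bushes per 10k tiles")
```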

Another example game is Deep Rock Galactic. I really like the level generation it uses. The biomes are diverse and interesting, and despite having hundreds of hours in the game, there are very few instances I can remember of the level generation being broken in some way: the vast majority of environments appear plausible and natural, which is impressive given the large number of game objects and terrain. The level generation code that runs each time a new map is generated has a heckton of different parameters and constraints that enable these varied, non-broken levels. There’s certainly no machine learning being used at runtime here, but I can plausibly imagine machine learning being useful in the development process for figuring out which parameters and constraints were the most important ones (especially because too many will cause excessive load times for players, so reducing that down would be useful).
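As a sketch of what that could look like during development (and definitely not how Deep Rock Galactic was actually built), here’s a toy example that fits a random forest to fake parameter-versus-quality data and reads off which parameters seem to matter most. Every parameter name and data point is invented; in practice the quality score might come from playtesting or automated plausibility checks:

```python
# Hypothetical sketch of estimating which level-generation parameters matter most.
# The parameter names and data are invented; this is not any real game's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
param_names = ["cave_density", "tunnel_width", "hazard_count", "ore_richness", "prop_budget"]

# Pretend each row is one generated level's parameters, and y is a measured quality score.
X = rng.random((500, len(param_names)))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.standard_normal(500)  # fake: density and hazards dominate

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, importance in sorted(zip(param_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")
```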

Machine learning certainly wouldn’t be necessary in either of these examples, but it could make certain parts of development easier.
