I’ve kinda just had a thought, and I don’t know if it’s horrible or not, so you tell me.
- 3D terrain in video games commonly starts with a fractal Perlin noise function (sketched below, together with the erosion pass).
- This doesn’t look very good on its own, so extra passes are applied to make it look better, such as changing the scale.
- One such technique is hydraulic erosion, a heavy GPU simulation that carves riverbeds and folds into the terrain.
- However, hydraulic erosion is VERY slow, and as such isn’t viable for a game like Minecraft that generates terrain in real time. It also doesn’t chunk well.
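For anyone who hasn’t seen it, here’s a minimal, unoptimized sketch of the pipeline the list describes, in Python/NumPy. Everything in it is illustrative: value noise stands in for true Perlin gradient noise (the octave layering is identical), and the droplet erosion model is heavily simplified and not mass-conserving. The nested Python loop in `erode` is the whole point, though: it’s why this pass is so slow and why real implementations go to the GPU.

```python
import numpy as np

def _hash01(ix, iy, seed):
    """Integer hash -> pseudo-random value in [0, 1). A pure function of the
    lattice coordinates, so it is deterministic everywhere in the world."""
    h = (ix.astype(np.uint32) * np.uint32(374761393)
         + iy.astype(np.uint32) * np.uint32(668265263)
         + np.uint32((seed * 2654435761) & 0xFFFFFFFF))
    h ^= h >> np.uint32(13)
    h *= np.uint32(1274126177)
    h ^= h >> np.uint32(16)
    return h.astype(np.float64) / 2**32

def value_noise(x, y, seed):
    """Smoothly interpolated lattice noise; a stand-in for Perlin noise."""
    xi, yi = np.floor(x).astype(np.int64), np.floor(y).astype(np.int64)
    xf, yf = x - xi, y - yi
    fade = lambda t: t * t * t * (t * (t * 6 - 15) + 10)  # Perlin's fade curve
    u, v = fade(xf), fade(yf)
    top = _hash01(xi, yi, seed) * (1 - u) + _hash01(xi + 1, yi, seed) * u
    bot = _hash01(xi, yi + 1, seed) * (1 - u) + _hash01(xi + 1, yi + 1, seed) * u
    return top * (1 - v) + bot * v

def fbm(x, y, octaves=6, lacunarity=2.0, gain=0.5, base_freq=1 / 256, seed=0):
    """Fractal (fBm) noise: stack octaves, each at double the frequency and
    half the amplitude of the previous one."""
    out, freq, amp = np.zeros_like(x), base_freq, 1.0
    for o in range(octaves):
        out = out + amp * value_noise(x * freq, y * freq, seed + o)
        freq *= lacunarity
        amp *= gain
    return out

def erode(h, droplets=20_000, max_steps=30, erode_rate=0.3,
          deposit_rate=0.3, capacity=4.0, rng=None):
    """Minimal droplet hydraulic erosion: each droplet walks downhill one
    cell at a time, scraping sediment off steep drops and dumping some of it
    as it moves. Note the doubly nested Python loop over droplets and steps:
    this is the expensive pass the post is talking about."""
    rng = rng or np.random.default_rng(0)
    h = h.copy()
    H, W = h.shape
    for _ in range(droplets):
        y, x = int(rng.integers(1, H - 1)), int(rng.integers(1, W - 1))
        sediment = 0.0
        for _ in range(max_steps):
            window = h[y - 1:y + 2, x - 1:x + 2]   # 3x3 neighbourhood
            dy, dx = np.unravel_index(np.argmin(window), (3, 3))
            drop = h[y, x] - window[dy, dx]
            if drop <= 0:                          # local pit: deposit, stop
                h[y, x] += sediment
                break
            take = max(0.0, min(erode_rate * drop, capacity - sediment))
            h[y, x] -= take                        # carve the current cell
            sediment += take
            y, x = y - 1 + int(dy), x - 1 + int(dx)
            if not (0 < y < H - 1 and 0 < x < W - 1):
                break
            h[y, x] += deposit_rate * sediment     # drop a little as it moves
            sediment *= 1 - deposit_rate
    return h

if __name__ == "__main__":
    ys, xs = np.mgrid[0:256, 0:256].astype(np.float64)
    terrain = erode(fbm(xs, ys) * 40.0)  # scale up so slopes are meaningful
```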
But what if it didn’t have to? Why not train something like a diffusion image model on thousands of pre-rendered high-quality simulations, and then have it transform a function like fractal Perlin noise? Basically “baking” a terrain pass inside a neural network. This’d still be slow, but would it be slower than simulating thousands of rain droplets? It could easily be made deterministic so it tiles across chunk borders, too. You could even train on real-world GIS data.
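On the chunk-border point, here’s one way the “baked” pass could stay deterministic: sample the global noise field for the chunk plus a halo of context, run the network on the padded tile, then crop the halo away. Everything below is hypothetical: `baked_erosion_pass`, `generate_chunk`, `CHUNK`, and `HALO` are names made up for this sketch, the “model” is just a box blur so the code runs, and `fbm` is reused from the sketch above.

```python
import numpy as np
# reuses fbm(x, y, ...) from the previous sketch

CHUNK = 64   # playable chunk size in cells (hypothetical)
HALO = 16    # extra context so the pass can see past the chunk edge

def baked_erosion_pass(tile):
    """Stand-in for the trained network (here: a 5x5 box blur, purely so the
    sketch executes). The actual idea would be a model trained on pairs of
    (raw fBm heightfield, fully simulated eroded heightfield)."""
    k = 5
    p = np.pad(tile, k // 2, mode="edge")
    out = np.zeros_like(tile)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + tile.shape[0], dx:dx + tile.shape[1]]
    return out / (k * k)

def generate_chunk(cx, cz, seed=0):
    """Deterministic per-chunk generation. The noise is a pure function of
    world coordinates, so neighbouring chunks sample identical values in
    their shared halo and (as long as HALO covers the pass's receptive
    field) agree exactly along the border."""
    x0, z0 = cx * CHUNK - HALO, cz * CHUNK - HALO
    n = CHUNK + 2 * HALO
    zs, xs = np.mgrid[z0:z0 + n, x0:x0 + n].astype(np.float64)
    tile = fbm(xs, zs, seed=seed)
    return baked_erosion_pass(tile)[HALO:-HALO, HALO:-HALO]
```

In a real version the halo would have to cover the model’s receptive field, and a multi-step diffusion sampler would cost far more per chunk than a single CNN forward pass, which is probably the first thing to benchmark.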
Has this been tried before?
Munkisquisher@lemmy.nz 2 hours ago
The current leader in this space, which we use in the film industry (it’s also used heavily in games), is Gaea (quadspinner.com). It’s a node-based erosion engine that lets you start with any height field: GIS data, something you’ve sculpted, or even just a few shapes mashed together.
It’s waaaay beyond just layering a few noises together, and it’s so much fun to use. It’s designed to be procedural but also art-directable, since we have to match artwork and reference of real-world locations.
There are probably some nodes that would benefit from being encoded in an ML framework, but there’s no single step to do it all.