I mean, considering that this is already an MMO, most files reside on the server you’re logged into, with only a small amount of local files being cached for graphics and things like that. Essentially, this isn’t really a bad idea at all, and it’s probably one of the few uses of AI that I could see. That being said, Gemini overall is such a shitty AI assistant already that I have no doubt a virtual AI assistant using Gemini in a video game would be just as bad.
Make it a downloadable package that runs a local model and I think I’d be far more fine with it. Like, I think it’s a tacky gimmick, but at least on-device it’s not hurting the environment.
titanicx@lemmy.zip 2 weeks ago
MirrorGiraffe@piefed.social 2 weeks ago
I’m not too big on these topics and would like to understand. Is a local model less resource intensive?
In my mind, if every gamer runs their own model, that must be less efficient than a centralised one that has the perfect hardware setup and only lends out the resources needed for each slime or whatever.
I’m thinking that a dedicated slime model would of course be better than the entire Gemini monster, but why is local better?
Cethin@lemmy.zip 2 weeks ago
I don’t know, but I’m willing to bet that economies of scale actually mean data centers are more efficient. This isn’t to say their use is justified, just that they’re able to take advantage of things a home computer can’t.
However, having to run it locally means it needs to be much more limited. This is doubly true if you want to run the game and the LLM at the same time. The LLM is easily able to consume all resources your system has available if you allow it to, which means the game won’t run well (if it runs at all). This limits the use so it can’t just be shoved everywhere and constantly running, like it could if it’s sent to a data center. It’s not more efficient, just less consumption.
SabinStargem@lemmy.today 2 weeks ago
On my system, I can play an RPG Maker game and use a 122B LLM at the same time, alongside a podcast. A model in that parameter range takes up about 70GB of DDR4 RAM and 36GB of VRAM. It used to be that a 120B AI would take a larger footprint, bringing the system to the brink. The hardware requirements are going down, and the quality and speed have also increased. I believe that when the next major sea change of hardware happens, AI will become very practical for gaming.
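For what it’s worth, those numbers roughly match the usual back-of-envelope estimate for quantized models: weights take about params × bits-per-weight ÷ 8 bytes, plus some runtime overhead. Here’s a rough sketch of that arithmetic — the formula, the bits-per-weight figures, and the overhead constant are all assumptions, not measurements:

```python
# Rough memory estimate for a quantized LLM.
# Assumption: total footprint ≈ weight bytes + a few GB of
# overhead for the KV cache and runtime buffers.

def model_memory_gb(params_billion: float, bits_per_weight: float,
                    overhead_gb: float = 4.0) -> float:
    """Approximate memory footprint in GB for a quantized model."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 1e9 + overhead_gb

# A 122B model at ~6.5 bits/weight lands near the ~106 GB
# (70 GB RAM + 36 GB VRAM) figure above.
print(round(model_memory_gb(122, 6.5)))  # ~103

# The same model at 4-bit quantization fits in much less:
print(round(model_memory_gb(122, 4.0)))  # ~65
```

That’s also why quantization matters so much for local use: dropping from ~6.5 to 4 bits per weight shaves off tens of gigabytes on a model this size.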
Cethin@lemmy.zip 2 weeks ago
Damn, your system is insane. Yeah, an RPG Maker game is next to nothing compared to that. Still, Dragon Quest is 3D, I think. It’ll take a lot more VRAM than RPG Maker.
I have 16GB VRAM, which is a lot for most systems. That’s easily consumed by an LLM. Any model that doesn’t use at least that much tends to perform pretty poorly, in my experience. That’s not mentioning how much heat it generates while running, which has to be removed from the system or it’ll slow down. Even if your system can handle it, it heats up fast. It’s great when I need a heater running, but when I need AC my room gets warm quick.
MyNameIsAtticus@lemmy.world 2 weeks ago
Local runs on device, so there’s no need to connect to a big data center that chugs lots of water, along with all those other problems. Of course, because it’s a far smaller model it’s nowhere near as accurate, but for things like this you don’t really need a big, accurate LLM.
I also thought I should add a disclaimer that I am a Software Developer, not an AI Developer. So there’s far less backing to my perspective than someone who works with this stuff for a living.
MirrorGiraffe@piefed.social 2 weeks ago
I’m also a sw engineer so we’re both guessing 😅
I’m guessing those data centers use that water for cooling, whereas most home computers run an electric fan. And furthermore, they probably use less electricity per token, since they want to maximize profits. I don’t have any numbers to back my hunch up, but I’m pretty sure the environment would suffer more if everyone ran their own.
I probably missed a lot of factors, such as what type of energy the data centers run on versus what the average Joe runs, etc.