Small models could be run locally and even incorporated into the game code itself, without needing a big company's AI, if developers wanted to.
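For illustration, a minimal sketch of what embedding a model in the game process could look like, assuming llama-cpp-python and a small quantized GGUF model shipped with the game assets (the paths and model name are placeholders):

```python
# Minimal sketch: an NPC backed by a small local model, run in-process
# via llama-cpp-python. Assumes a quantized GGUF file ships with the game.
from llama_cpp import Llama

llm = Llama(model_path="assets/models/npc-small.gguf", n_ctx=2048, verbose=False)

def npc_reply(persona: str, player_line: str) -> str:
    """Generate one line of NPC dialogue constrained by a persona prompt."""
    result = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": player_line},
        ],
        max_tokens=80,      # keep replies short to bound latency
        temperature=0.7,
    )
    return result["choices"][0]["message"]["content"]

print(npc_reply("You are a gruff blacksmith. Never break character.",
                "Can you repair my sword?"))
```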
etchinghillside@reddthat.com 18 hours ago
Are you willing to put in an API key and pay money for interactions with an LLM?
AA5B@lemmy.world 16 hours ago
There are models that can run on a Raspberry Pi.
lmmarsano@lemmynsfw.com 18 hours ago
Is an API key necessary? Pretty sure there are local LLMs.
SGforce@lemmy.ca 18 hours ago
They would increase requirements significantly and be generally pretty bad and repetitive. It’s going to take some time before that happens.
lmmarsano@lemmynsfw.com 17 hours ago
Would it? Game developers can run anything on their own servers.
hayvan@piefed.world 17 hours ago
That would be crazy expensive for the studios. LLM companies are selling their services at a loss at the moment.
Pika@sh.itjust.works 15 hours ago
Games already have pretty extensive requirements to function, so one model running for all NPCs wouldn't be that bad, I don't think. It would raise RAM requirements by maybe a gig or two and likely slow down initial loading time while the model loads in. I'm running a pretty decent model on my home server, which does the duties of a personified character, and the CT I'm running Ollama on only has 3 gigs allotted to it. Most of the work is on the GPU anyway.
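For reference, talking to a setup like that is just an HTTP call to Ollama's local API; something like this, assuming Ollama on its default port with a small model pulled (the model name and persona are placeholders):

```python
# Minimal sketch: NPC dialogue via a local Ollama server's /api/chat endpoint.
# Assumes Ollama is running locally on its default port 11434.
import requests

def npc_chat(persona: str, player_line: str) -> str:
    response = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3.2:3b",   # placeholder; any small local model works
            "messages": [
                {"role": "system", "content": persona},
                {"role": "user", "content": player_line},
            ],
            "stream": False,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

print(npc_chat("You are a village guard. Keep answers to one sentence.",
               "Seen anything strange tonight?"))
```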
I think the bigger problem would be testing-wise; it would be a royal pain in the butt to manage, having to make a profile/backstory for every character you want running on the LLM. You would likely need a boilerplate ruleset, and then make a few basic rules to model each character after. But the personality would never be the same player to player, nor would it be accurate. For example, I can definitely see the model trying to give advice that is impossible for the player to actually do because it isn't in the game's code.
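The boilerplate ruleset could be something like a shared system-prompt template plus a per-character profile; a rough sketch, with all the names and fields made up for illustration:

```python
# Minimal sketch: a shared ruleset template plus per-character profiles,
# meant to rein the model in to actions the game actually supports.
BASE_RULES = """You are an NPC in a video game. Stay in character.
Only suggest actions from this list, never anything else: {allowed_actions}.
Keep replies under two sentences."""

def build_system_prompt(name: str, backstory: str, allowed_actions: list[str]) -> str:
    rules = BASE_RULES.format(allowed_actions=", ".join(allowed_actions))
    return f"{rules}\nYour name is {name}. Backstory: {backstory}"

prompt = build_system_prompt(
    name="Mara the Herbalist",   # placeholder character profile
    backstory="Runs the apothecary; distrusts the king's guard.",
    allowed_actions=["buy potion", "sell herbs", "ask for rumors"],
)
```

Even with a rule list like that, nothing actually stops the model from wandering outside it, so the game would probably still need to validate whatever the NPC suggests.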