I can also see a lot of general use for gaming! There might be a future where game assets are generated on the fly, dialogue and storylines have no artificial limits, and game worlds have no invisible borders. The technology is useful, just not in the way those fools want to force it.
systemglitch@lemmy.world 1 day ago
I got DeepSeek to run short roleplaying adventures that are surprisingly fun and engaging. It’s an amped-up choose-your-own-adventure, so for this application the future is bright.
Not a single other LLM can do this in any way approaching acceptable.
And it still lies and makes shit up, but in a fantasy world I can let it pass, unless it’s trying to rob me of experience lol.
When it can do long sessions and entire careers instead of detailed one-offs, it’ll have found its niche for me. Right now it’s just a fun toy, prone to hallucinations.
I can’t believe people use these things for code…
Wildmimic@anarchist.nexus 1 day ago
Hule@lemmy.world 1 day ago
Yes, images where not every pixel is important. NPCs going about their business. The traffic. The weather. Games will use it, I’m sure of it.
explodicle@sh.itjust.works 1 day ago
Fair, but compare that to the fun of an actual in-person TTRPG. It’s the main way I make new friends as an adult man.
5too@lemmy.world 1 day ago
That’s the thing though - with an LLM, it’s all “hallucinations”. They’re just usually close to reality, and are presented with an authoritative, friendly voice.
(Or, in your case, they’re usually close to the established game reality!)
merc@sh.itjust.works 1 day ago
This is the thing I hope people learn about LLMs: it’s all hallucinations.
When an LLM has excellent data from multiple sources to answer your question, it is likely to give a correct answer. But that answer is still a hallucination. It’s dreaming up a sequence of words that is likely to follow the previous words. It’s more likely to give an “incorrect” hallucination when the data is contradictory or vague. But the process is identical. It’s just trying to dream up a likely series of words.
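A toy sketch of what that means mechanically (a made-up lookup table, not a real LLM): sampling the next word from a probability distribution is the entire process, whether the result reads as right or wrong.

```python
import random

# Made-up next-word probabilities, standing in for a trained model.
next_word_probs = {
    # well-attested context: the distribution is sharp, so the sample
    # almost always reads as "correct"
    ("paris", "is", "the"): {"capital": 0.90, "largest": 0.07, "best": 0.03},
    # vague/contradictory context: the distribution is flat, so the
    # sample is far more likely to read as a "hallucination"
    ("the", "dragon's", "name"): {"is": 0.40, "was": 0.35, "remains": 0.25},
}

def sample_next(context):
    # Identical sampling step in both cases; only the probabilities differ.
    probs = next_word_probs[tuple(context)]
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

print(sample_next(["paris", "is", "the"]))       # usually "capital"
print(sample_next(["the", "dragon's", "name"]))  # a coin flip, more or less
```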
OctopusNemeses@lemmy.world 12 hours ago
Before the tech industry set its sights on AI, “hallucination” was simply called error rate.
It’s the rate at which the model labels outputs incorrectly. But of course the tech industry, being what it is, needs to come up with alternative words that spin-doctor bad things into not-bad things. So what the field of AI had for decades been calling error rate, everyone now calls “hallucinations”. Error has far worse optics than hallucination. Nobody would be buying this LLM garbage if every article posted about it included paragraphs about how it’s full of errors.
That’s the thing people need to learn.
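In the old framing it’s just a plain fraction; a minimal sketch with made-up labels (not from any real benchmark):

```python
# Error rate in the classic sense: wrong outputs over total outputs.
predictions  = ["capital", "largest", "capital", "capital"]   # model outputs (made up)
ground_truth = ["capital", "capital", "capital", "capital"]   # expected answers (made up)

errors = sum(p != t for p, t in zip(predictions, ground_truth))
print(f"error rate: {errors / len(predictions):.0%}")  # 25%
```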