Yes, you’re anthropomorphizing far too much. An LLM can’t understand or recall (in the common sense of the word, i.e. have a memory), and it is not aware.
Those are all things that intelligent, thinking things do. LLMs are none of that. They are a giant black box of math that predicts text. It doesn’t even understand what a word is, or the meaning of anything it vomits out. All it knows is which text is statistically most likely to come next, with a little randomization thrown in to add “creativity”.
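For what it’s worth, the “most likely text plus a little randomization” part is literally just a sampling step over the model’s next-token probabilities. A minimal sketch of that step (toy numpy, hypothetical function name, not any particular model’s implementation):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick the next token id from the model's raw scores (logits).

    temperature=0 -> always take the single most likely token (greedy);
    higher temperature -> flatter distribution, more "creative" randomness.
    """
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))            # purely "most likely text"
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                       # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))  # weighted random draw
```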
spankmonkey@lemmy.world 5 days ago
Yes, the tradeoff between constrained randomization and accurately vomiting back the information it was fed is going to be difficult as long as it is designed to be interacted with as if it were a human who can know the difference.
It could be handled by having clearly defined ways of conveying whether the user wants factual or randomized output, but that would shatter the veneer of being intelligent.
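That knob sort of exists already as a decoding setting. A rough sketch of what such a toggle could map to, reusing the sampler above (the mode names and mapping are illustrative, not any real product’s API):

```python
# Illustrative only: map a user-facing mode onto decoding parameters.
def decode_settings(mode):
    if mode == "repeatable":  # always pick the most likely token
        return {"temperature": 0.0}
    if mode == "creative":    # allow randomized output
        return {"temperature": 0.9}
    raise ValueError(f"unknown mode: {mode}")
```

Worth noting that turning the randomness down only makes the output repeatable, not factual; the model can still confidently produce wrong text at temperature 0.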
nectar45@lemmy.zip 5 days ago
It probably needs a secondary “brain lobe” that is responsible for figuring out what the user wants and adjusting the nodes accordingly…and said lobe needs to have long-term memory…but then the problem with THAT is it would make the AI a lot slower and prone to glitching hard.
AI research is hard.
spankmonkey@lemmy.world 5 days ago
It is hard because they chose to make it hard by trying to do far too many things at the same time and sell it as a complete product.
nectar45@lemmy.zip 5 days ago
Yep, that is a problem too. The focus on creating general AI is really slowing down research on making AI better at specific stuff.
Making it a master of social situations and emotional responses is getting in the way of the AI being good at intelligence and logic, for example.
We need more specialized AI research instead of so much fake general intelligence.