Comment on SAG-AFTRA votes unanimously to expand its strike to include the games industry
candybrie@lemmy.world 1 year ago
Have you seen the work where they use another instance to fact-check the first? The MS Research podcast made it seem like a really viable way to find hallucinations without really needing to write more code. I'm curious whether other people find that it works, or if MS researchers are just too invested in GPT.
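The basic idea, as I understood it, is simple: ask one instance a question, then hand the question/answer pair to a second, fresh instance and ask it to verify. A minimal sketch of that loop, assuming the OpenAI Python client (the model name, prompts, and PASS/FAIL convention here are my own placeholders, not whatever MS Research actually used):

```python
# Hypothetical sketch: a second, independent LLM instance fact-checks the first.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt in a fresh chat context (a new 'instance')."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "When did Apollo 13 land on the Moon?"
answer = ask(question)

# The checker never sees the first conversation, only the Q/A pair,
# so it can't simply agree with its own earlier reasoning.
verdict = ask(
    "You are a fact checker. If the answer below contains a factual error "
    "or unsupported claim, reply FAIL plus a one-line reason; otherwise "
    "reply PASS.\n\n"
    f"Question: {question}\nAnswer: {answer}"
)

print(answer)
print(verdict)  # hopefully something like "FAIL: Apollo 13 never landed..."
```

Whether the checker instance is actually any better at spotting errors than the one that made them is exactly the open question, of course.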
huginn@feddit.it 1 year ago
I'll check out that podcast, but I'm deeply skeptical that one LLM can correct another, since neither of them truly understands anything: it's all statistics. Very detailed stats, but still stats.
And stats will be wrong.
Before ChatGPT was released, most Google AI engineers were looking into alternatives to LLMs, as the limitations of LLMs were becoming increasingly clear.
They’re convincing facsimiles of intelligence and a good tool for maybe 80% of basic uses.
But I agree with the consensus: they're a dead end in our search for intelligence, and the quality of their output is vastly overestimated.
candybrie@lemmy.world 1 year ago
I don’t know if you’ve already found it, but I’m pretty sure this is the episode.
huginn@feddit.it 1 year ago
Follow-up: I found the episode very unconvincing.
A few points:
He seems like a salesman who has fallen for his own pitch.
candybrie@lemmy.world 1 year ago
Thanks for listening and echoing some of my own doubts. I was kind of getting the feeling that MS researchers were too invested in GPT and not being realistic about its limitations. But I hadn't really seen others trying the two-instance method and discarding it as not useful.
huginn@feddit.it 1 year ago
Appreciate this link. Grabbed it on Spotify.
sj_zero 1 year ago
They're treated like something more than they are because we anthropomorphise everything; our brains assume anything that can string a sentence together is intelligent. "Oh, it can form a sentence! That must mean it's pretty much already general intelligence, since we gauge the intelligence of humans by the sentences they say!"