It’s way easier to figure that out than to check ChatGPT hallucinations. There’s usually someone saying why a response on SO is wrong, either in another response or a comment. You can filter most of the garbage right at that point. You get none of that information with ChatGPT. The data spat out is not equivalent.
deweydecibel@lemmy.world 5 months ago
That’s an important point, and it ties into the way ChatGPT and other LLMs take advantage of a flaw in the human brain:
Because it impersonates a human, people are more inherently willing to trust it. To think it’s “smart”. It’s dangerous how people who don’t know any better (and many people that do know better) will defer to it, consciously or unconsciously, as an authority and never second guess it.
And because it’s a one-on-one conversation, with no comment sections and no one else looking at the responses to call them out as bullshit, the user just won’t second guess it.
KeenFlame@feddit.nu 5 months ago
Your thinking is extremely black and white. Many, probably most actually, second guess chatbot responses.
gravitas_deficiency@sh.itjust.works 5 months ago
Think about how dumb the average person is.
Now, think about the fact that half of everyone is dumber than that.