Comment on Something Bizarre Is Happening to People Who Use ChatGPT a Lot
Shanmugha@lemmy.world 1 week ago
I'll take the bait. Let's think:
- there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it
- now there is an llm (fuck capitalization, I hate how much they are shoved everywhere) trained on their output
- now the llm is asked about the topic and computes an answer string
By definition, that answer string can contain all the probably-wrong claims without the proper indicators ("might", "under such and such circumstances", etc.)
If you want to claim that a 40%-wrong llm implies 40%-wrong sources, prove me wrong
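The argument above can be sketched as a toy simulation. This is a hypothetical model under the comment's own assumptions (sources that are 98% right and hedge exactly when unsure; a model that sometimes drops the hedge), not a claim about any real LLM; the 50% hedge-retention rate is made up for illustration:

```python
import random

random.seed(0)

# Toy model: sources are right 98% of the time, and for the 2% where they
# might be wrong they attach a hedge ("might", "under such circumstances").
# A model trained on them repeats the claims but keeps the hedge only some
# of the time (assumed 50% here).

N = 100_000
source_wrong = 0  # wrong claims asserted WITHOUT a hedge, by the sources
model_wrong = 0   # wrong claims asserted WITHOUT a hedge, by the model

for _ in range(N):
    claim_is_true = random.random() < 0.98  # sources are 98% right
    hedged = not claim_is_true              # sources hedge exactly when unsure

    # A hedged claim ("X might hold") is not counted as a wrong assertion.
    if not claim_is_true and not hedged:
        source_wrong += 1

    # The model repeats the claim but keeps the hedge only half the time.
    model_keeps_hedge = hedged and random.random() < 0.5
    if not claim_is_true and not model_keeps_hedge:
        model_wrong += 1

print(f"source unhedged-wrong rate: {source_wrong / N:.3f}")
print(f"model  unhedged-wrong rate: {model_wrong / N:.3f}")
```

Under these assumptions the sources never assert a falsehood flatly (rate 0.000), while the model does about 1% of the time, i.e. the model's unhedged error rate exceeds its sources' without the sources themselves being wrong more often.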
LovableSidekick@lemmy.world 1 week ago
It’s more up to you to prove that a hypothetical edge case you dreamed up is more likely than what happens under a normal bell curve. Given the size of typical LLM training data this seems futile, but if that’s how you want to spend your time, hey, knock yourself out.
Shanmugha@lemmy.world 1 week ago
Lol. Be my guest and knock yourself out, dreaming you know things