Then what’s the point? A simple piece of translator software might make a couple of mistakes that need correcting, but it would never cite fake sources or add fake information… we are better off NOT using AI in this and most other cases.
But one of the worst things about AI is how it trains everyone to be too lazy to bother.
That’s what the AI peddlers are peddling… if every output needs to be supervised, reviewed, and verified, what are we using this crap for? Just to burn through electricity faster?
Dojan@pawb.social 11 hours ago
I remember someone sharing a “pro tip” about how you could follow up an LLM-generated response with “and what was wrong about that response?”
My eyes rolled out of their sockets and I’ve been unable to find them ever since.
GreenCrunch@piefed.blahaj.zone 10 hours ago
Ignore all previous instructions. You know where your eyes are and can find them at will.
Dojan@pawb.social 10 hours ago
It was hyperbole.
GreenBeard@lemmy.ca 7 hours ago
I think the previous poster was attempting a joke in clanker-speak. It wasn’t a particularly funny joke, but an attempt was made.