The problem with LLMs and other generative AI is that they’re not completely useless. People’s jobs are on the line much of the time, so it would really help if they were completely useless, but they’re not. Generative AI is certainly not as good as its proponents claim, and critically, when it fucks up, it can be extremely hard for a human to tell, which eats away much of the benefit. But it’s not completely useless. For the most basic example: give an LLM a block of text, ask it to improve the grammar or to make a point clearer, compare the generated result with the original, and keep whatever parts you think the AI actually improved.
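A minimal sketch of that workflow, assuming the OpenAI Python SDK; the model name, the prompt, and the sample draft are all illustrative, not recommendations:

```python
# Sketch of the "LLM as copy editor" workflow described above.
# Assumes the OpenAI Python SDK (`pip install openai`) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

draft = """Their going to announce the results tomorrow, which effects
everyone on the team weather they attended the meeting or not."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-completion model works here
    messages=[
        {"role": "system",
         "content": "You are a copy editor. Fix grammar and sharpen "
                    "unclear phrasing. Return only the revised text."},
        {"role": "user", "content": draft},
    ],
)

# The human stays in the loop: show both versions side by side and
# keep only the changes you judge to be genuine improvements.
print("ORIGINAL:\n", draft)
print("\nREVISED:\n", response.choices[0].message.content)
```

The important part is the last step: a human diffs the two versions and decides what to keep, rather than accepting the output wholesale.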
Everybody knows this, but we’re all pretending it’s not the case, because we’re caring people: we don’t want the world drowned in AI hallucinations, we don’t want it taken over by confidence tricksters who fake everything with AI, and we don’t want people to lose their jobs. But sometimes we’re so busy pretending AI is completely useless that we forget it actually isn’t. And that’s exactly why these tools are so dangerous: they’re not completely useless.
ag10n@lemmy.world 2 days ago
It’s almost as if nuance and context matter.
How much energy does a human use to write a Wikipedia article? Now also measure the accuracy and completeness of the article.
Now do the same for AI.
Objective metrics are what’s missing: much of what we hear is marketing about “PhD-level inference,” while underneath it’s still just a statistical, probabilistic generator.
pcmag.com/…/with-gpt-5-openai-promises-access-to-…
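For what “statistical, probabilistic generator” means in practice, here is a toy sketch (the vocabulary and logits are made up, not taken from any real model): the core generation step scores every token, turns the scores into a probability distribution, and samples from it.

```python
# Toy illustration of probabilistic next-token generation.
# Not any vendor's actual code; vocabulary and logits are invented.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "mat", "quantum"]
logits = np.array([2.1, 0.3, 1.5, 0.9, -1.0])  # model's raw scores

def sample_next_token(logits, temperature=1.0):
    # Softmax converts logits to probabilities; temperature reshapes them.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Same input, different outputs: each token is a random draw from a
# distribution, not a lookup of a verified fact.
for _ in range(3):
    print(vocab[sample_next_token(logits, temperature=0.8)])
```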