Comment on VCs are starting to partner with private equity to buy up call centers, accounting firms and other "mature companies" to replace their operations with AI

pinball_wizard@lemmy.zip 4 weeks ago

I’m quite aware that, yes, it’s technically less likely to hallucinate in these cases.

But that doesn’t address the core issue: the query was written by the LLM, without expert oversight, which still leads to situations that are effectively hallucinations.

Technically, it is returning a “correct” direct answer to a question that no rational actor would ever have asked.
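A minimal sketch of what I mean, using a hypothetical text-to-SQL setup (the table, the data, and the generated query are all invented for illustration):

```python
import sqlite3

# Hypothetical mini-database: monthly revenue per region.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EU", "2024-01", 100.0), ("EU", "2024-02", 250.0),
     ("US", "2024-01", 900.0), ("US", "2024-02", 300.0)],
)

# What the user actually asked:
#   "Which region grew the most between January and February?"
#
# A plausible LLM-generated query that misreads the intent: it finds
# the region with the highest *total* revenue instead of the highest
# *growth*. It is syntactically valid, executes cleanly, and returns
# a perfectly "correct" answer -- to a question nobody asked.
llm_query = """
    SELECT region, SUM(revenue) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
    LIMIT 1
"""

# Prints ('US', 1200.0), even though EU grew and US shrank.
print(con.execute(llm_query).fetchone())
```

No step in that pipeline "hallucinated" in the technical sense, but the end user still gets a confident, wrong answer.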

The meaningless, correct-looking, and wrong result is still just going to be called a hallucination by ordinary users.

For common usage, it’s important not to promise end users that these scenarios are hallucination-free.

You and I understand that, technically, they’re not getting back a hallucination, just an answer to a bad question.

But for end users to understand how to use the tool safely, they still need to know that a meaningless, correct-looking, and wrong answer is still possible (and, today, still likely).
