Comment on It’s too easy to make AI chatbots lie about health information, study finds

sugar_in_your_tea@sh.itjust.works 2 days ago

I sincerely hope people understand what LLMs are and what they aren’t. They’re sophisticated search engines that aggregate results into natural language and refine those results based on baked-in prompts (in addition to what you provide), and if there are gaps, the LLM invents something to fill them.

If the model was trained on good data and the baked-in prompt is reasonable, you can get reasonable results. But even in the best case, there’s still a chance the LLM hallucinates something; that’s just how they work.

For most queries, I’m mostly looking for which search terms to use for checking original sources, or sometimes a reference to pull up something I already know but am having trouble remembering (i.e., I will recognize the correct answer). For those use cases, it’s pretty effective.

Don’t use an LLM as a source of truth, use it as an aid for finding truth. Be careful out there!
