Danksy
@Danksy@lemmy.world
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
If we ignore the other poster, do you think the logic in my previous comment is circular?
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
That was what I was trying to say; I can see that the wording was ambiguous.
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
If a solution is correct, then it is correct. A correct solution that was generated randomly is no less correct for it. It just means you won't always get correct solutions, which is why they are checked afterwards.
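A minimal sketch of that generate-then-check idea, with a hypothetical `generate_candidate` and `is_correct` standing in for a real generator and verifier:

```python
import random
from typing import Optional

def generate_candidate() -> int:
    # Hypothetical generator: proposes solutions at random,
    # with no understanding of whether they are correct.
    return random.randint(0, 100)

def is_correct(candidate: int) -> bool:
    # Hypothetical checker: an independent test of correctness
    # (a toy target value stands in for a real verifier here).
    return candidate == 42

def solve(max_attempts: int = 1000) -> Optional[int]:
    # Generate and check: randomness means any single candidate
    # may be wrong, but a candidate that passes the check is
    # no less correct for having been produced randomly.
    for _ in range(max_attempts):
        candidate = generate_candidate()
        if is_correct(candidate):
            return candidate
    return None  # no correct solution found within the budget
```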
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
It’s not circular. LLMs cannot be fluent because fluency comes from an understanding of the language. An LLM is incapable of understanding, so it is incapable of being fluent. It may be able to mimic fluency, but that is a different thing.
- Comment on We have to stop ignoring AI’s hallucination problem 5 months ago:
It’s not a bug; it’s a natural consequence of the methodology. A language model won’t always be correct when it doesn’t know what it is saying.