Comment on Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works

PlasticExistence@lemmy.world ⁨2⁩ ⁨weeks⁩ ago

Parrots can mimic humans too, but they don’t understand what we’re saying the way we do.

LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent. They can’t think. They just estimate which next word or sentence is most probable and string things together that way.

If you ask ChatGPT a question, it analyzes your words and responds with the series of words it has calculated to be the most probable continuation.
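A toy sketch of that idea (this is not ChatGPT’s actual model, just an illustration of picking the highest-probability next word from a made-up probability table):

```python
# Hypothetical bigram table: for each word, the estimated probabilities
# of possible next words. Real LLMs learn vastly larger tables (over
# whole token contexts), but the selection principle is the same.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.3, "moon": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def most_likely_next(word):
    """Return the next word with the highest estimated probability."""
    candidates = bigram_probs.get(word, {})
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(most_likely_next("the"))  # picks "cat", the 0.6-probability option
```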

The reason they seem so intelligent is that they have been trained on absolutely gargantuan amounts of text from books, websites, news articles, etc. Because of this, the calculated probabilities of related words and ideas are accurate enough to let them mimic human speech in a convincing way.

And when they start hallucinating, it’s because they don’t understand what they’re saying, so they can’t tell a plausible-sounding falsehood from a fact. So far this is a core problem that nobody has been able to solve. The best mitigation involves checking the output of one LLM using a second LLM.
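That mitigation pattern can be sketched like this. The `generate` and `verify` functions below are hypothetical stand-ins for calls to two separate models, not any real API:

```python
def generate(prompt):
    # Stand-in for the first model producing a draft answer.
    return "Paris is the capital of France."

def verify(claim):
    # Stand-in for a second model asked whether the draft is supported.
    # A real verifier would be another LLM call; here it's a trivial check.
    return "Paris" in claim

def answer_with_check(prompt):
    draft = generate(prompt)
    if verify(draft):
        return draft
    # Fall back rather than repeating a possible hallucination.
    return "I'm not sure."

print(answer_with_check("What is the capital of France?"))
```

The point is only the shape of the pipeline: the second model acts as a filter on the first model’s output rather than fixing its lack of understanding.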
