Comment on OpenAI Says It's Scanning Users' ChatGPT Conversations and Reporting Content to the Police
yesman@lemmy.world 1 day ago
“When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts,” the blog post notes. “If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.”
See? Even the people who make AI don’t trust it with important decisions. And the “trained” humans don’t even see it if the AI doesn’t flag it first. This is just a microcosm of why AI is always the weakest link in any workflow.
This is exactly the use-case for an LLM and even OpenAI can’t make it work.
Perspectivist@feddit.uk 1 day ago
I don’t think it is. An LLM is a language-generating tool, not a language-understanding one.
iglou@programming.dev 1 day ago
That is actually incorrect. It is also a language understanding tool. You don’t have an LLM without NLP. NLP includes processing and understanding natural language.
Perspectivist@feddit.uk 1 day ago
But it doesn’t understand - at least not in the sense humans do. When you give it a prompt, it breaks it into tokens, runs them through patterns learned from its training data, and generates the most statistically likely continuation. It doesn’t “know” what it’s saying, it’s just producing the next most probable output. That’s why it often fails at simple tasks like counting letters in a word - it isn’t actually reading and analyzing the word character by character, just predicting text over tokens. In that sense it’s simulating understanding, not possessing it.
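The “most statistically likely continuation” idea can be sketched with a toy bigram model - purely an illustration of next-token prediction, not how a real LLM works internally (real models use learned neural weights over subword tokens, not raw frequency counts):

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; tokens here are just whitespace-split words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count continuations: token -> Counter of tokens that followed it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in training, or None."""
    if not follows[token]:
        return None
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it followed "the" twice, beating "mat" and "fish"
```

The model never “knows” what a cat is; it only reproduces the continuation that was most frequent in its data, which is the point being made about simulated understanding.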
iglou@programming.dev 1 day ago
You’re entering a more philosophical debate than a technical one, because for this point to make any sense, you’d have to define what “understanding” language means for a human at a level as low as what you’re describing for an LLM.
Can you confirm that what a human brain does to understand language is so different from what an LLM does?
I’m not saying an LLM is smart, but saying that it doesn’t understand, when having computers “understand” natural language is the core of NLP, is meh.