OpenAI just admitted it can’t identify AI-generated text. That’s bad for the internet and it could be really bad for AI models.

In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.
I wonder if it was too many false positives, like when some tool said the US Constitution was written by AI. That seems plausible enough, considering that LLMs imitate human writing very closely and can’t prevent their own hallucinations, which are arguably the best predictor of whether a text was written by a person in good faith or not.
Ogmios@lemmy.world 1 year ago
Bluntly, even before AI there was an ever-present threat that anything you encountered online was written by someone with ulterior motives. Maybe AI is just making that easier for people to accept, because they don’t want to distrust other people. The solution I see is to always be aware of what other purposes any particular piece of media could be serving, and to keep a clear picture in your own mind of what’s important to you, so that no matter who wrote something or why, you won’t be personally misled.
volodymyr@lemmy.world 1 year ago
I don’t think it’s enough to always assume you might be misled; the influence remains even when you don’t notice it. It’s also not advisable to be too suspicious, since that breeds a conspiratorial mindset, which is the dark side of critical thinking. The information space is already loaded with trash, and AI is about to amplify it. I think we need personal identity management, and AI agents will have identities too. The danger is that this is hard to do on the open internet, but it’s partly possible; the technologies exist.
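By “technologies” I mean things like public-key signatures bound to an identity, whether that identity belongs to a person or an AI agent. A minimal sketch of the idea, using Python’s cryptography package purely as an illustration (the hard parts, key distribution and deciding which keys to trust, are left out):

```python
# Illustrative sketch: a post is signed by its author's private key,
# so readers can verify which identity published it and that it wasn't altered.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Each identity (human or AI agent) holds a long-lived keypair.
author_key = Ed25519PrivateKey.generate()
author_pub = author_key.public_key()

post = "I wrote this myself.".encode("utf-8")

# The author signs the post; the signature travels alongside it.
signature = author_key.sign(post)

# A reader who trusts the public key can check the attribution.
try:
    author_pub.verify(signature, post)
    print("Signature valid: post is attributable to this identity.")
except InvalidSignature:
    print("Signature invalid: attribution cannot be trusted.")
```

This doesn’t tell you whether the text came from a model, only who is vouching for it, which is arguably the more useful question.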
Ogmios@lemmy.world 1 year ago
Frankly, with open access to the entire world, there are a very large number of completely real conspiracies you are connected to, through intelligence agencies, mafias and terrorist organizations. Failure to recognize this fact is a big problem with the way the Internet has been designed.