if you’re dumb enough to trust a large language model because someone told you “iTs Ai!” no amount of facts will be of great utility to you.
The article says it’s not poisoning the AI data, only providing valid facts. The scraper still gets content, just not the content it was aiming for.
melpomenesclevage@lemmy.dbzer0.com 1 year ago
XeroxCool@lemmy.world 1 year ago
That take would be more digestible if I wasn’t stuck on the same planet as those people.
melpomenesclevage@lemmy.dbzer0.com 1 year ago
i’m saying they want to be lied to. it would be disrespectful to offer them the truth.
ObsidianZed@lemmy.world 1 year ago
Until the AI generating the content starts hallucinating.
melpomenesclevage@lemmy.dbzer0.com 1 year ago
and the data for the LLM is now salted with procedural garbage. it’s great!
XeroxCool@lemmy.world 1 year ago
Thank you for catching that. Even reading through again, I couldn’t find it while skimming. With the mention of X2 and RSS, I assumed that paragraph would just be more technical description outside my knowledge. Instead, what I did home in on was
“No real human would go four links deep into a maze of AI-generated nonsense.”
Leading me to be pessimistic.