Comment on Mastodon is Rewinding the Clock on Social Media — in a Good Way

Hexagon@feddit.it 1 year ago
Wrong. The next terrible thing is mass-AI-generated propaganda and disinformation, like in the “dead internet” theory.

sparr@lemmy.world 1 year ago
Web of trust solves this problem, until people start intentionally trusting AIs as much as they do other humans, at which point it’s no longer a problem.

OpenStars@kbin.social 1 year ago
Next? I think you misspelled "current":-D
Hexagon@feddit.it 1 year ago
My bad. But I think we haven’t seen the full extent of it yet.
OpenStars@kbin.social 1 year ago
Tbf, it seems like the current "mass-AI-generated propaganda and disinformation" has actual humans behind it, i.e. state-sponsored disinformation as part of modern warfare. That's different from sheer random BS pooped out of an algorithm designed to maximize short-term profits for someone using enough buzzwords to get their algorithm bought out by a buyer dumb enough to fall for the pitch and short-sighted enough to not realize the wider implications... or worse yet, one who realizes them and simply does not care.
It reminds me of the story of the USA tax preparation software companies that intentionally went on a campaign to confuse military veterans and students (seriously!? what kind of evil mfers...!?). They got caught and even punished & fined, but only something like a decade later, and ofc the original CEO and also the next one etc. had long since received their fat bonus checks, leaving the company holding the bag (the liability). Thus it was "a smart move", so long as you entirely disregard ethics. What was presented as a "free gift" to generate good PR for the company was in reality predation upon people they deemed highly trusting, or at least minimally likely to sue... and they were correct. Now, watching interviews with these tech-bros, I get the same vibe: who cares, so long as I get mine.