Comment on "AI-powered misinformation is the world's biggest short-term threat, Davos report says"
burliman@lemmy.world 10 months ago
Bad humans are prompting these AI engines. Still gotta fix that. You know, root of the problem. I can tell you as an older human, misinformation has been supercharged every election. But yeah, let’s blame AI this time around so we don’t have to figure out the tough problem.
Phanatik@kbin.social 10 months ago
The problem isn't the misinformation itself; it's the rate at which misinformation is produced. Generative models lower the barrier to entry, so anyone in their living room can make deepfakes of your favourite politician. The blame isn't on AI for creating misinformation; it's for making the situation worse.
hellothere@sh.itjust.works 10 months ago
Fallible humans are building them in the first place.
No LLM - masquerading as AI - is free of biases.
That’s not to say that ‘bad’ people prompting biased LLMs is not an issue (it very much is), but even ‘good’ people are not going to get objective results.
saltesc@lemmy.world 10 months ago
Correct. AI is simply a tool. People need to get their heads around this and stop perceiving it as some sentient magical entity with rogue prerogatives and uncontested liberties.
Whenever AI does something whack, that was a human. Everything it knows and does comes from the knowledge and instructions of humans. It’s us. If AI produces misinformation, it’s simply doing what it was taught and instructed by someone.