Nobody is blaming the AI model. We are blaming the researchers and users of AI, which is kind of the point.
Blaming a language model for lying is like charging a deer with jaywalking.
tribut@infosec.pub 1 week ago
echolalia@lemmy.ml 1 week ago
Which, in an ideal world, is why AI generated comments should be labeled.
I always brake when I see a deer at the side of the road.
(Yes, people can lie on the Internet. But if you funded an army of human propagandists to convince people by any means necessary, I think you would find it expensive. Most people find lying like this unpleasant; it takes a mental toll. With AI, the same thing looks possible for much cheaper.)
Rolive@discuss.tchncs.de 1 week ago
I’m glad Google still labels the AI overview in search results so I know to scroll further for actually useful information.
FauxLiving@lemmy.world 1 week ago
They only label the LLM-generated content as ‘AI’.

All of Google’s search algorithms are “AI” (i.e. machine learning); that’s what made them so effective when they first appeared on the scene. They use those algorithms, plus a massive amount of data about you (way more than your comment history), to target you for advertising, including political advertising.

If you don’t want AI-generated content, then you shouldn’t use Google. It is entirely built on machine learning whose sole goal is to match you with people who want to buy access to your views.
shneancy@lemmy.world 1 week ago
The researchers said all AI posts were approved by a human before posting; it was their choice how many lies to include.