Comment on OpenAI: Our models are more persuasive than 82% of Reddit users
Yingwu@lemmy.dbzer0.com 1 day ago
If you don’t read the article, this sounds worse than it is. I think this is the important part:
ChatGPT’s persuasion performance is still short of the 95th percentile that OpenAI would consider “clear superhuman performance,” a term that conjures up images of an ultra-persuasive AI convincing a military general to launch nuclear weapons or something. It’s important to remember, though, that this evaluation is all relative to a random response from among the hundreds of thousands posted by everyday Redditors using the ChangeMyView subreddit. If that random Redditor’s response ranked as a “1” and the AI’s response ranked as a “2,” that would be considered a success for the AI, even though neither response was all that persuasive.
OpenAI’s current persuasion test fails to measure how often human readers were actually spurred to change their minds by a ChatGPT-written argument, a high bar that might actually merit the “superhuman” adjective. It also fails to measure whether even the most effective AI-written arguments are persuading users to abandon deeply held beliefs or simply changing minds regarding trivialities like whether a hot dog is a sandwich.
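To make the relative scoring concrete, here is a minimal sketch of how that kind of percentile could be computed, assuming it amounts to a pairwise win rate against randomly sampled human responses; the rating scale and numbers below are hypothetical, not OpenAI's actual data or methodology:

```python
import random

def persuasion_percentile(ai_ratings, human_ratings, trials=100_000):
    """Estimate a 'more persuasive than X% of users' figure as a pairwise
    win rate: sample one AI rating and one human rating, and count a win
    when the AI response scores strictly higher (ties count as half)."""
    wins = 0.0
    for _ in range(trials):
        ai = random.choice(ai_ratings)
        human = random.choice(human_ratings)
        if ai > human:
            wins += 1
        elif ai == human:
            wins += 0.5
    return 100 * wins / trials

# Hypothetical 1-5 persuasiveness ratings. A modest edge over mostly
# low-rated human responses already yields a win rate well above 50%,
# even though neither side's responses are especially persuasive.
human = [1, 1, 1, 2, 2, 2, 3, 3, 4, 5]
ai    = [2, 2, 3, 3, 3, 3, 4, 4, 4, 5]
print(f"AI beats a random human response ~{persuasion_percentile(ai, human):.0f}% of the time")
```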
faltryka@lemmy.world 1 day ago
This is the buried lede that's really concerning, I think.
Their goal is to create AI agents that are indistinguishable from humans and capable of convincing people to hold certain positions.
Sometime in the future, all online discourse may be just a giant AI-fueled tool sold to the highest bidders to manufacture consent.
spankmonkey@lemmy.world 1 day ago
A very large portion of people, possibly more than half, do change their views to fit in with everyone else. So an army of bots pretending to hold a view will sway a significant portion of the population just through repetition and exposure, by creating the impression that most other people think that way.
UsernameHere@lemmy.world 1 day ago
So if a bunch of accounts on Lemmy repeat an opinion that isn't popular with people I meet IRL, then that could be an attempt to change public opinion using bots on Lemmy?
spankmonkey@lemmy.world 1 day ago
In the case of Lemmy, it is more likely that community members are people, because the population is small enough that a mass influx of bots would be easy to notice compared to Reddit. Plus, Lemmy communities tend to have obvious rules and enforcement that filter out people who aren't on the same page.
For example, you will notice that the general opinions on .world, .ml, and blahaj fit their local instance cultures, and trying to change that with bots would likely run afoul of the moderation or the established community members.
It is far easier to hide bots within a large pool of potential users than within a smaller one.
takeda@lemm.ee 1 day ago
It's no surprise that social media companies are working on AI: their platforms are no longer social; they are just tools to control public opinion.
Other governments and oligarchs will pay any money to have that kind of power.
rottingleaf@lemmy.world 1 day ago
It already is, at least when it comes to Armenia and Azerbaijan.