Comment on OpenAI: Our models are more persuasive than 82% of Reddit users
spankmonkey@lemmy.world 19 hours ago
Their goal is to create AI agents that are indistinguishable from humans and capable of convincing people to hold certain positions.
A very large portion of people, possibly more than half, do change their views to fit in with everyone else. So an army of bots pretending to hold a view will sway a significant portion of the population just through repetition and exposure, by creating the impression that most other people think that way.
UsernameHere@lemmy.world 19 hours ago
So if a bunch of accounts on lemmy repeat an opinion that isn’t popular with people I meet IRL then that could be an attempt to change public opinion using bots on lemmy?
spankmonkey@lemmy.world 18 hours ago
In the case of Lemmy, it is more likely that the members of communities are people, because the population is small enough that a mass influx of bots would be easy to notice compared to reddit. Plus the Lemmy communities tend to have obvious rules and enforcement that filter out people who aren’t on the same page.
For example, you will notice that the general opinions on .world, .ml, and blahaj fit their local instance culture, and trying to change that with bots would likely run afoul of the moderation or the established community members.
It is far easier to hide bots within a large pool of potential users than within a smaller one.
UsernameHere@lemmy.world 16 hours ago
It just has to be proportional. Reports on these bot farms have shown that they absolutely go into small niche areas to influence people. Facebook groups being one of the most notable that comes to mind.
spankmonkey@lemmy.world 16 hours ago
What do you think are the views being promoted by bots on lemmy?
Are there accounts you think are bots, or are you assuming that opinions differing from those of people you know in real life must come from bots? I know people who have wildly different views in real life, some of whom I avoid because of those views.