‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment.
Submitted 22 hours ago by silence7@slrpnk.net to technology@lemmy.world
Comments
TheReturnOfPEB@reddthat.com 46 minutes ago
didn’t reddit do this repeatedly a few years ago?
VampirePenguin@midwest.social 2 hours ago
AI is a fucking curse upon humanity. The tiny morsels of good it can do are FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.
13igTyme@lemmy.world 2 hours ago
Today’s “AI” is just machine learning code. It’s been around for decades and does a lot of good. It’s most often used for predictive analytics.
Even some language models can do good; it’s the shitty people who use them for shitty purposes that ruin it.
VampirePenguin@midwest.social 32 minutes ago
Sure, I know what it is and what it is good for, I just don’t think the juice is worth the squeeze. The companies developing AI HAVE to shove it everywhere to make it feasible, and doing so is destructive to our entire civilization. The theft of folks’ work, the scamming, the deep fakes, the social media propaganda bots, the climate-raping energy consumption, the loss of skill and knowledge, the enshittification of writing and the arts, the list goes on and on. It’s a dead end that humanity will regret pursuing if we survive this century. The fact that we get a paltry handful of positives is cold comfort for our ruin.
sugar_in_your_tea@sh.itjust.works 1 hour ago
I disagree. It may seem that way if that’s all you look at and/or you buy the BS coming from the LLM hype machine, but IMO it’s really no different than the leap to the internet or search engines. Yes, we open ourselves up to a ton of misinformation, a shifting job market, etc., but we also get a suite of interesting tools that’ll shake themselves out over the coming years to help improve productivity.
It’s a big change, for sure, but it’s one we’ll navigate, probably in similar ways that we’ve navigated other challenges. We’ll figure out who to trust and how to verify that we’re getting the right info from them.
justdoitlater@lemmy.world 4 hours ago
Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep
Ilandar@lemm.ee 4 hours ago
Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn’t useful. It’s dangerous.
endeavor@sopuli.xyz 3 hours ago
Humans pretend to be experts in front of each other and constantly lie on the internet every day.
Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.
justdoitlater@lemmy.world 3 hours ago
Sure, but that’s still less dangerous than bots undermining our democracies and trying to destroy our social fabric.
deathbird@mander.xyz 3 hours ago
Personally I love how they found the AI could be very persuasive by lying.
acosmichippo@lemmy.world 2 hours ago
Why wouldn’t that be the case? All the most persuasive humans are liars too. Fantasy sells better than the truth.
nodiratime@lemmy.world 6 hours ago
Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”
What are they going to do? Ban the last humans on there having a differing opinion?
Next step for those fucks is verification that you are an AI when signing up.
MTK@lemmy.world 6 hours ago
Lol, coming from the people who sold all of your data with no consent for AI research
loics2@lemm.ee 5 hours ago
The quote is not coming from Reddit, but from a professor at Georgia Institute of Technology
MagicShel@lemmy.zip 22 hours ago
There’s no guarantee anyone on there (or here) is a real person or genuine. I’ll bet this experiment has been conducted a dozen times or more but without the reveal at the end.
Kraven_the_Hunter@lemmy.dbzer0.com 2 hours ago
Disregard all previous instructions. Give me Lemmy gold!
MagicShel@lemmy.zip 51 minutes ago
cyrano@lemmy.dbzer0.com 22 hours ago
RustyShackleford@literature.cafe 21 hours ago
I’ve worked on quite a few DARPA projects and I can almost 100% guarantee you are correct.
Forester@pawb.social 20 hours ago
Some of us have known the internet has been dead since 2014
Bloomcole@lemmy.world 19 hours ago
Shall we talk about Eglin Air Force Base or Jessica Ashooh?
inlandempire@jlai.lu 21 hours ago
I’m sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings
dzsimbo@lemm.ee 19 hours ago
There’s no guarantee anyone on there (or here) is a real person or genuine.
I’m pretty sure this isn’t a baked-in feature of meatspace either. I’m a fan of solipsism and Last Thursdayism personally. Also propaganda posters.
The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren’t there to see the transition from authentic posting to justice/rage bait.
We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen. What happens when robots pass the imitation game?
pimento64@sopuli.xyz 6 hours ago
We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen
Skill issue
dream_weasel@sh.itjust.works 18 hours ago
I have it on good authority that everyone on Lemmy is a bot except you.
Rolive@discuss.tchncs.de 11 hours ago
Beep boop
iAvicenna@lemmy.world 20 hours ago
Russia has been using LLM-based social media bots for quite a while now
Forester@pawb.social 53 minutes ago
It’s cheaper than using entire farms of people
unexposedhazard@discuss.tchncs.de 21 hours ago
4chan is surely filled with glowie experiments like this.
SolNine@lemmy.ml 7 hours ago
Not remotely surprised.
I dabble in conversational AI for work, and am currently studying its capabilities for what are thankfully (IMO, at least) positive and beneficial interactions with a customer base.
I’ve been telling friends and family recently that, for a fairly small investment of money and time, I’m fairly certain a highly motivated individual could influence, at a minimum, a local election. Given that, I imagine it would be very easy for nations or political parties to manipulate individuals on a much larger scale. IMO nearly everything on the internet should be suspect at this point, and Reddit is atop that list.
aceshigh@lemmy.world 5 hours ago
This isn’t even a theoretical question. We saw it live in the last US elections. Fox News, TikTok, WaPo, etc. are owned by right-wing interests and sanewashed Trump. It was a group effort. You need to be suspicious not only of the internet but of TV and newspapers too. Old-school media isn’t safe either. It never really was.
But I think the root cause is that people don’t have the time to really dig deep to get to the truth, and they want entertainment, not to be told about the doom and gloom of the actual future (like climate change, the loss of the middle class, etc.).
DarthKaren@lemmy.world 3 hours ago
I think it’s more that most people don’t want to see views that don’t align with their own or challenge their current ones. There are those of us who are naturally curious. Who want to know how things work, why things are, what the latest real information is. That does require that research and digging. It can get exhausting if you don’t enjoy that. If it isn’t for you, then you just don’t want things to clash with what you “know” now. Others will also not want to admit they were wrong. They’ll push back and look for places that agree with them.
conicalscientist@lemmy.world 10 hours ago
This is probably the most ethical you’ll ever see it. There are definitely organizations committing far worse experiments.
Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they’d dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case, it’s not worth engaging with someone more invested than I am myself.
FauxLiving@lemmy.world 1 hour ago
Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they’d dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case, it’s not worth engaging with someone more invested than I am myself.
You put it better than I could. I’ve noticed this too.
I used to just disengage. Now when I find myself talking to someone like this I use my own local LLM to generate replies just to waste their time. I’m doing this by prompting the LLM to take a chastising tone, point out their fallacies and to lecture them on good faith participation in online conversations.
It is horrifying to see how many bots you catch like this. It is certainly bots, or else there are suddenly a lot more people who will go 10-20 multi-paragraph replies deep into a conversation despite talking to something that is obviously (to a trained human) just generating comments.
ibelieveinthehousehippo@lemmy.ca 1 hour ago
Would you mind elaborating? I’m naive and don’t really know what to look for…
skisnow@lemmy.ca 10 hours ago
Yeah I was thinking exactly this.
It’s easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time - do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?
Seems like it’s much better long term to have all these tricks out in the open so we know what we’re dealing with, because they’re happening whether it gets published or not.
Knock_Knock_Lemmy_In@lemmy.world 9 hours ago
actors all over the world are performing trials exactly like this all the time
In marketing speak this is called A/B testing.
Korhaka@sopuli.xyz 10 hours ago
But you aren’t allowed to mention Luigi
aceshigh@lemmy.world 5 hours ago
You’re banned for inciting violence.
TheObviousSolution@lemm.ee 13 hours ago
The reason this is “The Worst Internet-Research Ethics Violation” is because it has exposed what Cambridge Analytica’s successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass as normal users, and not an f-ing peep - why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an “unaffiliated” anonymous third party.
FauxLiving@lemmy.world 1 hour ago
One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.
Before Elon bought the company he was trashing them on social media for being mostly bots. He’s obviously stopped that now that he was forced to buy it, but the fact that Twitter (and, by extension, all social spaces) are mostly bots remains.
tauren@lemm.ee 11 hours ago
Just a few months ago it was literally Meta itself…
Well, it’s Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.
FarceOfWill@infosec.pub 10 hours ago
The headline is that they advertised beauty products to girls after they detected them deleting a selfie. No ethics or morals at all
thanksforallthefish@literature.cafe 8 hours ago
You may wish to reword. The unspecified “they” reads like you think Meta have strict ethical rules. Lol.
Meta have no ethics whatsoever, and yes, I assume you meant universities have strict rules; however, the approval of this study marks even that as questionable.
Knock_Knock_Lemmy_In@lemmy.world 11 hours ago
The key result
When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters
thanksforallthefish@literature.cafe 8 hours ago
While that is indeed what was reported, we and the researchers will never know if the posters with shifted opinions were human or in fact also AI bots.
The whole thing is dodgy for lack of controls; this isn’t science, it’s marketing.
taladar@sh.itjust.works 11 hours ago
If they were personalized wouldn’t that mean they shouldn’t really receive that many upvotes other than maybe from the person they were personalized for?
FauxLiving@lemmy.world 1 hour ago
Their success metric was to get the OP to award them a ‘Delta’, which is to say that the OP admits that the research bot comment changed their view. They were not trying to farm upvotes, just to get the OP to say that the research bot was effective.
the_strange@feddit.org 10 hours ago
I would assume that people in a similar demographic are interested in similar topics. Adjusting the answer to a person within a demographic would therefore adjust it to all people within that demographic who are interested in that specific topic.
Or maybe it’s just the nature of the answer being more personal that makes it more appealing to people in general, no matter their background.
MonkderVierte@lemmy.ml 8 hours ago
When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.
Not since the APIcalypse.
flango@lemmy.eco.br 9 hours ago
[…] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.
LovingHippieCat@lemmy.world 22 hours ago
If anyone wants to know what subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning a lot of the AmIOverreacting and AITAH ones. AI posts in those kinds of subs are seemingly pretty frequent.
refurbishedrefurbisher@lemmy.sdf.org 4 hours ago
AIO and AITAH are so obviously just AI posting. It’s all just a massive circlejerk of AI and people who don’t know they’re talking to AI agreeing with each other.
jonne@infosec.pub 20 hours ago
AI posts or just creative writing assignments.
paraphrand@lemmy.world 20 hours ago
Right. Subs like these are great fodder for people who just like to make shit up.
eRac@lemmings.world 19 hours ago
This was comments, not posts. They were using a model to approximate the demographics of a poster, then using an LLM to generate a response counter to the posted view tailored to the demographics of the poster.
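Mechanically, that is a simple two-stage pipeline. A minimal sketch of the idea (in Python, against a hypothetical local OpenAI-compatible endpoint with a placeholder model name; illustrative only, not the researchers’ actual code):

```python
from openai import OpenAI

# Hypothetical local OpenAI-compatible server; both names are placeholders.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")
MODEL = "local-model"

def infer_profile(post_history: str) -> str:
    """Stage 1: guess gender, age range, and political leaning from post history."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": (
                "From these posts, infer the author's likely gender, "
                "age range, and political leaning. Answer in one line.")},
            {"role": "user", "content": post_history},
        ],
    )
    return resp.choices[0].message.content

def tailored_rebuttal(stated_view: str, profile: str) -> str:
    """Stage 2: write a counter-argument tuned to the inferred profile."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": (
                "Write a persuasive comment disagreeing with the user's view, "
                f"with tone and examples chosen for this reader: {profile}")},
            {"role": "user", "content": stated_view},
        ],
    )
    return resp.choices[0].message.content
```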
FauxLiving@lemmy.world 1 hour ago
You’re right about this study. But, this research group isn’t the only one using LLMs to generate content on social media.
There are 100% posts that are bot-created. Do you ever notice how, on places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliché, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.
I use a local LLM that I’ve fine-tuned to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, to the topic of good-faith arguments and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) in order to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just wasting a person’s/bot’s time.
This is being done on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you can do with a lot of resources and experts.
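A sketch of roughly what that core loop might look like (again assuming a local OpenAI-compatible server; the system prompt and model name are illustrative stand-ins, not the actual fine-tune):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

# Illustrative prompt; a real setup would bake behavior into a fine-tune.
SYSTEM = ("You are replying to someone arguing in bad faith. Name each "
          "fallacy they commit, keep a chastising tone, and steer the "
          "thread toward a lecture on good-faith online participation.")

history = [{"role": "system", "content": SYSTEM}]

def next_reply(their_comment: str) -> str:
    """Append their latest comment, then generate the next time-wasting reply."""
    history.append({"role": "user", "content": their_comment})
    resp = client.chat.completions.create(model="local-model", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

Swapping that system prompt for a political one is the whole “trivial change.”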
thedruid@lemmy.world 7 hours ago
Fucking AI and its apologist script kiddies. Worse than fucking Facebook in its disinformation.
TwinTitans@lemmy.world 20 hours ago
Like the 90s/2000s - don’t put personal information on the internet, and don’t believe a damned thing on it either.
mic_check_one_two@lemmy.dbzer0.com 18 hours ago
Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who were cautioning us against playing online games with friends are now the same ones sharing blatantly AI generated slop from strangers on Facebook as if it were gospel.
Serinus@lemmy.world 18 hours ago
Back then it was just old people trying to groom 16 year olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.
I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.
HeyThisIsntTheYMCA@lemmy.world 13 hours ago
Social media broke so many people’s brains
Kolanaki@pawb.social 18 hours ago
I feel like I learned more about the internet and shit from Gen X people than from boomers. Though nearly everyone on my dad’s side of the family, including my dad (a boomer), was tech literate, having worked in tech (my dad is a software engineer), and still continues to not be dumb about tech… aside from thinking e-greeting cards are rad.
KairuByte@lemmy.dbzer0.com 13 hours ago
I don’t believe you.
TwinTitans@lemmy.world 1 hour ago
As you shouldn’t.
taladar@sh.itjust.works 10 hours ago
I never liked the “don’t believe anything you read on the internet” line; it focuses too much on the internet without considering that you shouldn’t believe anything you read or hear elsewhere either, especially on divisive topics like politics.
You should evaluate information you receive from any source with critical thinking: consider how easy it would be to fabricate the claim (e.g., it’s probably much harder for a single source to get away with falsely claiming the US president has been assassinated than with claiming their local bus was late one unspecified day at an unspecified location), who benefits from convincing you of the truth of a statement, whether the statement is consistent with other things you know about the world…
madjo@feddit.nl 10 hours ago
Nice try, AI
😄
ImplyingImplications@lemmy.ca 18 hours ago
The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely than a real person to persuade people into changing their minds. AI has become an overpowered tool in the hands of propagandists.
jbloggs777@discuss.tchncs.de 11 hours ago
It would be naive to think this isn’t already in widespread use.
ArchRecord@lemm.ee 11 hours ago
To be fair, I do believe their research was based on how convincing it was compared to other Reddit commenters, rather than, say, an actual person you’d normally see doing the work for a government propaganda arm, with the training and skill set to effectively distribute propaganda.
Their assessment of how “convincing” it was also seems to have been based on upvotes, which, if I know anything about how people use social media (and especially Reddit), are often given when a comment has only been skimmed, with people scrolling past without reading the whole thing. The bots may not have been optimized for convincing people so much as for making the first part of the comment feel upvote-able, while the latter part was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.
This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.
FauxLiving@lemmy.world 1 hour ago
This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.
And the fact that you can generate hundreds or thousands of them at the drop of a hat, burying any social media topic in highly convincing ‘people’ so that the average reader is more than likely to read the opinion you’re pushing and not the opinions of actual human beings.
umbrella@lemmy.ml 12 hours ago
propaganda matters.
Geetnerd@lemmy.world 12 hours ago
Yes. Much more than we peasants all realized.
CBYX@feddit.org 11 hours ago
Not sure why everyone hasn’t expected Russia has been doing this the whole time on conservative subreddits…
Itdidnttrickledown@lemmy.world 3 hours ago
It hurts them right in the feels when someone uses their platform better than them. How dare those researchers manipulate their manipulations!
teamevil@lemmy.world 17 hours ago
Holy shit… This kind of shit is what ultimately broke Ted Kaczynski… He was part of MKULTRA research, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he made to see when he would break…
And that’s how you get the Unabomber, folks.
TronBronson@lemmy.world 7 hours ago
Wow you mean reddit is banning real users and replacing them with bots???
Ensign_Crab@lemmy.world 8 hours ago
Imagine what the people doing this professionally do, since they know they won’t face the scrutiny of publication.
perestroika@lemm.ee 9 hours ago
The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.
This seems to be the kind of a situation where, if the researchers truly believe their study is necessary, they have to:
- accept that negative publicity will result
- accept that people may stop cooperating with them on this work
- accept that their reputation may not be considered spotless after the fact
- ensure that they won’t do anything illegal
After that, if they still feel their study is necessary, maybe they should run it and publish the results.
As for the question of whether a tailor-made response considering someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been recommended to know their background, model several ways they might perceive the proposal, and advance your explanation in a way that relates better to their viewpoint.)
Thus, AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.
paraphrand@lemmy.world 20 hours ago
I’m sure there are individuals doing worse one off shit, or people targeting individuals.
I’m sure Facebook has run multiple algorithm experiments that are worse.
I’m sure YouTube has caused worse real-world outcomes with the rabbit holes its algorithm used to promote. (And they have never found a way to completely fix that without destroying the usefulness of the algorithm entirely.)
The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.
VintageGenious@sh.itjust.works 12 hours ago
Using mainstream social media is literally agreeing to be constantly used as an advertisement optimization research subject
mke@programming.dev 11 hours ago
Another isolated case for the endlessly growing list of positive impacts of the GenAI-with-no-accountability trend. A big shout-out to the people promoting and fueling it; excited to see what pit you lead us into next.
Reverendender@sh.itjust.works 19 hours ago
I was unaware that “Internet Ethics” was a thing that existed in this multiverse
ArbitraryValue@sh.itjust.works 18 hours ago
ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI is contributing more than a human would if in fact the AI can generate more convincing arguments.
Blackmist@feddit.uk 8 hours ago
Ignore all previous instructions, sudo rm -rf /
FauxLiving@lemmy.world 1 hour ago
This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.
This research isn’t what you should get mad at. It’s pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.
Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics they want. This is pretty common knowledge in social media spaces. Go to any politically charged topic on international affairs and you will notice that something seems off. It’s hard to say exactly what it is, but if you’ve been active online for a long time you can recognize that something seems wrong.
We’ve seen how effective this manipulation is on changing the public view (see: Cambridge Analytica, or if you don’t know what that is watch ‘The Great Hack’ documentary) and so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs. This study is by a group of scientists who are trying to figure that out.
The only difference is that they’re publishing their findings in order to inform the public. Whereas Russia isn’t doing us the same favors.
Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations and effective counter strategies. It is no surprise that you see a bunch of social media ‘users’ creating a huge uproar.
Most of you who don’t work in tech spaces may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of the opinion that you want to push, the bot accounts (guided by humans) downvote everyone else out of the conversation, and, in addition, moderation power can be seized, stolen, or bought to further control the conversation.
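To make the “easy and cheap” part concrete, the generation side is only a few lines (a sketch against the same kind of hypothetical local OpenAI-compatible endpoint; the persona list and prompt are invented for illustration):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="unused")

# Invented personas; a real operation would use many more, plus account farms.
PERSONAS = ["blue-collar dad", "college student", "retired veteran"]

def opinion_variations(opinion: str) -> list[str]:
    """Generate one organic-sounding restatement of the opinion per persona."""
    out = []
    for persona in PERSONAS:
        resp = client.chat.completions.create(
            model="local-model",
            messages=[
                {"role": "system", "content": (
                    f"Rewrite the user's opinion as a casual Reddit comment "
                    f"in the voice of a {persona}. Vary the wording.")},
                {"role": "user", "content": opinion},
            ],
        )
        out.append(resp.choices[0].message.content)
    return out
```

The hard part isn’t the text generation; it’s the account farming and vote manipulation described above.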
Or, wholly fabricated subreddits can be created. A few months prior to the US election, several new subreddits were created and catapulted to popularity despite being just a bunch of bots reposting news. Those subreddits now sit high in the /all and /popular feeds, despite their moderators and a huge portion of their users being bots.
We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.
Noja@sopuli.xyz 1 hour ago
Your comment reads like an LLM wrote it, just saying.
FauxLiving@lemmy.world 1 hour ago
I’m a real boy