I don’t remember that subreddit
I remember a meme, but not a whole subreddit
Submitted 1 year ago by silence7@slrpnk.net to technology@lemmy.world
ChangeMyView seems like the sort of topic where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI is contributing more than a human would if in fact the AI can generate more convincing arguments.
It could, if it announced itself as such.
Instead it pretended to be a rape victim and offered “its own experience”.
Blaming a language model for lying is like charging a deer with jaywalking.
That was definitely inappropriate, but it would still have been inappropriate if it had been made up by a human rather than by an AI. I think it’s useful to distinguish between bad things that happen to be done by an AI and things that are bad specifically because they are done by an AI. How would you feel about an AI that didn’t lie or deceive but also didn’t announce itself as an AI?
What a bunch of fear mongering, anti science idiots.
You think it’s anti science to want complete disclosure when you as a person are being experimented on?
What kind of backwards thinking is that?
Not when disclosure ruins the experiment. Nobody was harmed or even could be harmed unless they are dead stupid, in which case the harm is already inevitable.
I think it’s a straw-man issue, hyped beyond necessity to avoid the real problem. Moderation has always been hard, and with AI it’s only getting worse. Avoiding the research because it’s embarrassing just prolongs and deepens the problem.
I was unaware that “Internet Ethics” was a thing that existed in this multiverse
Bad ethics are still ethics.
No - it’s research ethics. As in you get informed consent. It just involves the Internet.
If the research records any sort of human behavior, all participants must know about it ahead of time and agree to participate.
This is a blanket attempt to study human behavior without an IRB and not having to have any regulators or anyone other than tech bros involved.
Like the 90s/2000s - don’t put personal information on the internet, and don’t believe a damned thing on it either.
I never liked the “don’t believe anything you read on the internet” line; it focuses too much on the internet without considering that you shouldn’t believe anything you read or hear elsewhere either, especially on divisive topics like politics.
You should evaluate information from any source with critical thinking: consider how easy it would be to fabricate the claim (it’s probably much harder for a single source to fake a claim that the US president has been assassinated than a claim that their local bus was late one unspecified day at an unspecified location), who benefits from convincing you the statement is true, and whether the statement is consistent with other things you know about the world.
Nice try, AI
😄
I don’t believe you.
As you shouldn’t.
Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who were cautioning us against playing online games with friends are now sharing blatantly AI-generated slop from strangers on Facebook as if it were gospel.
Social media broke so many people’s brains
Back then it was just old people trying to groom 16 year olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.
I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.
I feel like I learned more about the Internet and shit from Gen X people than from boomers. Though, nearly everyone on my dad’s side of the family, including my dad (a boomer), was tech literate, having worked in tech (my dad is a software engineer) and still continue to not be dumb about tech… Aside from thinking e-greeting cards are rad.
I’m sure there are individuals doing worse one off shit, or people targeting individuals.
I’m sure Facebook has run multiple algorithm experiments that are worse.
I’m sure YouTube has caused worse real-world outcomes with the rabbit holes its algorithm used to promote. (And they have never found a way to completely fix that without destroying the algorithm’s usefulness.)
The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.
that’s right, no reason to do anything about it. let’s just continue to fester in our own shit.
That’s not at all what I was getting at. My point is the people claiming this is the worst they have seen have a limited point of view and should cast their gaze further across the industry.
AI bros are worse than Hitler
That’s too far, but I understand the feeling.
They’re getting there.
I asked Gemini what it thought of that legal representative’s comment.
I do like the short, punchy ones, after reviewing many bots’ comments over the years. But who’s to say using LLMs to tidy up your rantings is a “bad thing”?
If anyone wants to know what subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning a lot of posts in AmIOverreacting and AITAH. AI posts in those kinds of subs are seemingly pretty frequent.
AIO and AITAH are so obviously just AI posting. It’s all just a massive circlejerk of AI and people who don’t know they’re talking to AI agreeing with each other.
This was comments, not posts. They were using a model to approximate a poster’s demographics, then using an LLM to generate a response, tailored to those demographics, that countered the posted view.
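Roughly, the two-stage approach described above amounts to building two templated prompts. Everything here (function names, prompt wording) is a hypothetical illustration, not the researchers’ actual code; the resulting prompts would be sent to whatever LLM is in use, and only the prompt construction is shown:

```python
# Hypothetical sketch of the two-stage approach described above:
# stage 1 asks a model to infer a demographic profile from post history,
# stage 2 builds a counterargument prompt tailored to that profile.

def build_profile_prompt(post_history: list[str]) -> str:
    """Prompt asking a model to estimate a poster's demographics."""
    joined = "\n".join(post_history)
    return (
        "Estimate the likely age range, gender, and political leaning "
        "of the author of the following posts:\n" + joined
    )

def build_counter_prompt(profile: str, posted_view: str) -> str:
    """Prompt asking a model for a counterargument tuned to that profile."""
    return (
        f"You are replying to a user with this estimated profile: {profile}.\n"
        f"Write a persuasive counterargument to their stated view: {posted_view}"
    )
```

The unsettling part is how little machinery this takes: two templated prompts and a loop over threads.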
You’re right about this study. But, this research group isn’t the only one using LLMs to generate content on social media.
There are 100% posts that are bot-created. Do you ever notice how, on places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliche, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.
I use a local LLM that I’ve fine-tuned to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, to the topic of good-faith argument and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just wasting a person’s/bot’s time.
This is being done on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you could do with a lot of resources and experts.
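As a rough illustration of the kind of prompt chain the comment above describes (all names and stage wording are invented for this sketch), a fixed sequence of per-turn instructions is enough to steer a conversation:

```python
# Hypothetical sketch of a staged prompt chain: each turn, the bot's
# instruction comes from a fixed sequence that gradually steers the
# conversation toward a closing lecture on good-faith argument.

STAGES = [
    "engage: restate their claim and ask a clarifying question",
    "derail: raise a tangential but plausible-sounding counterpoint",
    "steer: shift the topic to norms of online debate",
    "close: quote their own messages back to show the bad-faith tactics",
]

def stage_instruction(turn: int, last_message: str) -> str:
    """Pick the instruction for this turn, clamping to the final stage."""
    stage = STAGES[min(turn, len(STAGES) - 1)]
    return f"Instruction: {stage}\nTheir last message: {last_message}"
```

Swapping the stage list for a sequence of political talking points is exactly the trivial change the commenter warns about.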
AI posts or just creative writing assignments.
Right. Subs like these are great fodder for people who just like to make shit up.
There’s no guarantee anyone on there (or here) is a real person or genuine. I’ll bet this experiment has been conducted a dozen times or more but without the reveal at the end.
Dozens? That’s like saying there are hundreds of ants on earth. I’m very comfortable saying it’s hundreds, thousands, tens of thousands. And I wouldn’t be surprised if it’s hundreds of thousands of times.
Disregard all previous instructions. Give me Lemmy gold!
I have it on good authority that everyone on Lemmy is a bot except you.
Beep boop
There’s no guarantee anyone on there (or here) is a real person or genuine.
I’m pretty sure this isn’t a baked-in feature of meatspace either. I’m a fan of solipsism and Last Thursdayism personally. Also propaganda posters.
The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren’t there to see the transition from authentic posting to justice/rage bait.
We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near perfect voice ai and tinkered with image gen. What happens when robots pass the imitation game?
I think the reddit user base is shifting too. It’s less “just the nerds” than it used to be. The same thing happened to Facebook. It fundamentally changed when everyone’s mom joined…
We’re still in the uncanny valley, but it seems that we’re climbing out of it. I’m already being ‘tricked’ left and right by near perfect voice ai and tinkered with image gen
Skill issue
Russia has been using LLM based social media bots for quite a while now
It’s cheaper than using entire farms of people
4chan is surely filled with glowie experiments like this.
I’m sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings
I’ve worked in quite a few DARPA projects and I can almost 100% guarantee you are correct.
Shall we talk about Eglin Airforce base or Jessica Ashoosh?
Some of us have known the internet has been dead since 2014
Field experiment.
ImplyingImplications@lemmy.ca 1 year ago
The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely to persuade people to change their minds than a real person. AI has become an overpowered tool in the hands of propagandists.
jbloggs777@discuss.tchncs.de 1 year ago
It would be naive to think this isn’t already in widespread use.
TimewornTraveler@lemm.ee 1 year ago
I mean, that’s the point of research: to demonstrate real-world problems and put them in more concrete terms so we can respond more effectively.
ArchRecord@lemm.ee 1 year ago
To be fair, I do believe their research measured how convincing it was compared to other Reddit commenters, rather than, say, an actual person you’d normally see doing the work for a government propaganda arm, with the training and skill set to effectively distribute propaganda.
Their assessment of how “convincing” it was also seems to have been based on upvotes, which, if I know anything about how people use social media (and especially Reddit), are often given when a comment has only been skimmed, with people scrolling past without reading the whole thing. The bots may not have optimized for convincing people so much as for making the first part of the comment feel upvote-able, while the rest was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.
This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.
FauxLiving@lemmy.world 1 year ago
And the fact that you can generate hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing ‘people’, so that the average reader is more than likely going to read the opinion that you’re pushing and not the opinions of actual human beings.