FauxLiving
@FauxLiving@lemmy.world
- Comment on A Judge Accepted AI Video Testimony From a Dead Man 4 days ago:
AI, which is inherently a misrepresentation of truth
Oh, you’re one of those
- Comment on A Judge Accepted AI Video Testimony From a Dead Man 5 days ago:
In the US criminal justice system, sentencing happens after the trial. A mistrial requires rules to be violated during the trial.
Also, there were at least three people in that room who hold a Juris Doctor and know the Arizona Court Rules, one of them representing the defendant. Not a single one of them had any objection to allowing this statement.
- Comment on A Judge Accepted AI Video Testimony From a Dead Man 5 days ago:
They can’t appeal on this issue because the defense didn’t object to the statement and, therefore, did not preserve the issue for appeal.
- Comment on A Judge Accepted AI Video Testimony From a Dead Man 5 days ago:
AI should absolutely never be allowed in court. Defense is probably stoked about this because it’s obviously a mistrial. Judge should be reprimanded for allowing that shit
You didn’t read the article.
This isn’t grounds for a mistrial; the trial was already over. This happened during the sentencing phase, and the defense didn’t object to the statements.
From the article:
Jessica Gattuso, the victim’s rights attorney who worked with Pelkey’s family, told 404 Media that Arizona’s laws made the AI testimony possible. “We have a victim’s bill of rights,” she said. “[Victims] have the discretion to pick what format they’d like to give the statement. So I didn’t see any issues with the AI and there was no objection. I don’t believe anyone thought there was an issue with it.”
- Comment on A Judge Accepted AI Video Testimony From a Dead Man 5 days ago:
This is just weird uninformed nonsense.
The reason that outbursts, like gasping or crying, can cause a mistrial is that they can unfairly influence a jury, so the rules of evidence do not allow them. But this isn’t part of the trial; the jury has already reached a verdict.
Victim impact statements are not evidence and are not governed by the rules of evidence.
It’s ludicrous that this was allowed and honestly is grounds to disbar the judge. If he allows AI nonsense like this, then his courtroom can not be relied upon for fair trials.
More nonsense.
If you were correct, and there were actual legal grounds to object to these statements, then the defense attorney would have objected to them.
Here’s an actual attorney. From the article:
Jessica Gattuso, the victim’s rights attorney who worked with Pelkey’s family, told 404 Media that Arizona’s laws made the AI testimony possible. “We have a victim’s bill of rights,” she said. “[Victims] have the discretion to pick what format they’d like to give the statement. So I didn’t see any issues with the AI and there was no objection. I don’t believe anyone thought there was an issue with it.”
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
It depends, but it’d be really hard to tell. I type around 90-100 WPM, so my comment only took me a few minutes.
If they’re responding within a second or two with a giant wall of text, it could be a bot, but it may just be a person who’s staring at the notification screen waiting to reply. It’s hard to say.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
I would have gotten away with it if it were not for you kids!
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric; they argue inelegantly. After a long time of talking online, you get used to how people respond to different rhetorical strategies.
In these bot-infested social spaces, there seem to be a large number of commenters who argue far too well while also deploying a huge number of fallacies. Any individual case could be explained by a person simply choosing to argue in bad faith, but in these online spaces there are just too many commenters who deploy these tactics.
In addition, what you see in some of these spaces are commenters with a very structured way of arguing, as if they’ve picked your comment apart into bullet points and then selected arguments against each point that are technically on topic but subtly misleading.
I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.
For example, if you could somehow measure the ratio of good-faith comments to fallacy-laden ones in a given community, there would likely be a normal baseline (i.e., there are 10 people who are bad at arguing for every 1 person who is good at arguing, and of those skilled arguers, 10% are commenting in bad faith and using fallacies). You could then compare this ratio across various online topics to discover the ones that appear to be botted.
That way you could objectively say that, on the topic of gun control in one specific subreddit, we’re seeing an elevated ratio of bad-faith to good-faith commenters and, therefore, that this topic/subreddit is being actively botted with LLMs. This information could be used to deploy anti-bot countermeasures (captchas, for example).
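As a toy illustration of the kind of metric I mean (a minimal Python sketch with made-up data; the hard, unsolved part is the classifier that labels a comment good faith or bad faith):

```python
from collections import Counter

def bad_faith_ratio(labels: list[str]) -> float:
    """Fraction of comments labeled bad faith by some upstream classifier."""
    counts = Counter(labels)
    total = counts["good_faith"] + counts["bad_faith"]
    return counts["bad_faith"] / total if total else 0.0

def flag_botted(communities: dict[str, list[str]],
                baseline: float, factor: float = 2.0) -> list[str]:
    """Flag communities whose bad-faith ratio exceeds `factor` times the baseline."""
    return [name for name, labels in communities.items()
            if bad_faith_ratio(labels) > factor * baseline]

# Made-up labels: assume roughly 1 bad-faith comment in 10 is 'normal'.
data = {
    "gardening":   ["good_faith"] * 90 + ["bad_faith"] * 10,
    "gun_control": ["good_faith"] * 60 + ["bad_faith"] * 40,
}
print(flag_botted(data, baseline=0.10))  # ['gun_control']
```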
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
The research in the OP is a good first step in figuring out how to solve the problem.
That’s in addition to anti-bot measures. I’ve seen some sites that require you to solve a cryptographic hashing problem before accessing the site. It doesn’t slow a regular person down, but it does require anyone running a bot to provide a much larger amount of compute power to each bot, which increases the cost to the operator.
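The rough idea, as a minimal sketch (this is generic hashcash-style proof of work, not any specific site’s scheme; the difficulty value is made up):

```python
import hashlib
from itertools import count

DIFFICULTY = 18  # required leading zero bits; tune to set the client's cost

def solve(challenge: bytes) -> int:
    """Client side: burn CPU until sha256(challenge + nonce) is small enough."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int) -> bool:
    """Server side: one hash to check what took the client ~2**18 attempts."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))

nonce = solve(b"per-session-challenge")
assert verify(b"per-session-challenge", nonce)
```

A single visitor barely notices the delay, but a bot farm has to pay it on every request.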
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
I’m a real boy
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
They only label the LLM-generated content as ‘AI’.
All of Google’s search algorithms are “AI” (i.e., machine learning); it’s what made them so effective when they first appeared on the scene. They just use their algorithms and a massive amount of data about you (way more than is in your comment history) to target you for advertising, including political advertising.
If you don’t want AI-generated content then you shouldn’t use Google; it is entirely made up of machine learning whose sole goal is to match you with people who want to buy access to your views.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
I think when posting on a forum/message board it’s assumed you’re talking to other people
That would have been a good position to take in the early days of the Internet; it is a very naive assumption to make now. Even in the 2010s, actors with a large amount of resources (state intelligence agencies, advertisers, etc.) could hire human beings from low-wage English-speaking countries to generate fake content online.
LLMs have only made this cheaper, to the point where I assume that most of the commenters on political topics are likely bots.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
Over the years I’ve noticed replies that are far too on the nose. Probing just the right pressure points as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. It’s either they scrolled through my profile. Or as we now know it’s a literal psy-op bot. Already in the first case it’s not worth engaging with someone more invested than I am myself.
You put it better than I could. I’ve noticed this too.
I used to just disengage. Now, when I find myself talking to someone like this, I use my own local LLM to generate replies just to waste their time. I do this by prompting the LLM to take a chastising tone, point out their fallacies, and lecture them on good-faith participation in online conversations.
It is horrifying to see how many bots you catch like this. It is certainly bots; otherwise, there are suddenly a lot more people who will go 10-20 multi-paragraph replies deep into a conversation with something that is obviously (to a trained human) just generating comments.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
Their success metric was to get the OP to award them a ‘Delta’, which is to say that the OP admits that the research bot comment changed their view. They were not trying to farm upvotes, just to get the OP to say that the research bot was effective.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.
Before Elon bought the company, he was trashing it on social media for being mostly bots. He’s obviously stopped that now that he was forced to buy it, but the fact remains that Twitter (and, by extension, other social spaces) is mostly bots.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
This, of course, doesn’t discount the fact that AI models are often much cheaper to run than paying human salaries.
And you can generate hundreds or thousands of them at the drop of a hat to bury any social media topic in highly convincing ‘people’, so that the average reader is more likely to read the opinion you’re pushing than the opinions of actual human beings.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
Aside from thinking e-greeting cards are rad.
As a late Gen-X/early Millennial, e-greeting cards are rad.
Kids these days don’t know how good they have it with their gif memes and emoji-supporting character encodings… get off my lawn you young whippersnappers!
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
You’re right about this study. But this research group isn’t the only one using LLMs to generate content on social media.
There are definitely posts that are bot-created. Do you ever notice how, on places like Am I Overreacting or Am I the Asshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliché, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.
I use a local LLM, which I’ve fine-tuned, to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, toward the topic of good-faith arguments and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) in order to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just waste a person’s/bot’s time.
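The core loop is only a few lines. A sketch of the general shape (this assumes a generic OpenAI-compatible local server such as llama.cpp or Ollama; the endpoint, model name, and prompt are made up for illustration, not my actual setup):

```python
import requests  # talking to a local OpenAI-compatible server

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # hypothetical local server

SYSTEM_PROMPT = (
    "You are replying to a social media commenter who is arguing in bad faith. "
    "Take a chastising tone, name the logical fallacies in their latest reply, "
    "and lecture them on good-faith participation, steering the conversation "
    "toward norms of online discussion rather than the original topic."
)

def generate_reply(conversation: list[dict]) -> str:
    """Send the running conversation to the local model and return its reply."""
    response = requests.post(ENDPOINT, json={
        "model": "local-model",  # placeholder model name
        "messages": [{"role": "system", "content": SYSTEM_PROMPT}] + conversation,
    }, timeout=120)
    return response.json()["choices"][0]["message"]["content"]

print(generate_reply([{"role": "user", "content": "<their latest comment>"}]))
```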
This is being done on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you can do with a lot of resources and experts.
- Comment on ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’ | The most persuasive “people” on a popular subreddit turned out to be a front for a secret AI experiment. 1 week ago:
This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.
This research isn’t what you should get mad at. It’s pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.
Intelligence services of nation-states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics that they want. This is pretty common knowledge in social media spaces. Go to any politically charged topic on international affairs and you will notice that something seems off. It’s hard to say exactly what it is, but if you’ve been active online for a long time you can recognize that something is wrong.
We’ve seen how effective this manipulation is at changing the public view (see: Cambridge Analytica, or, if you don’t know what that is, watch the documentary ‘The Great Hack’), so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs. This study is by a group of scientists who are trying to figure that out.
The only difference is that they’re publishing their findings in order to inform the public, whereas Russia isn’t doing us the same favor.
Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations, and effective counterstrategies. It is no surprise to see a bunch of social media ‘users’ creating a huge uproar.
Those of you who don’t work in tech spaces may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of the opinion you want to push, the bot accounts (guided by humans) downvote everyone else out of the conversation, and, in addition, moderation power can be seized, stolen, or bought to further control the conversation.
Or wholly fabricated subreddits can be created. A few months prior to the US election, several new subreddits were created and catapulted to popularity despite being just a bunch of bots reposting news. Those subreddits now rank high in the /all and /popular feeds, despite their moderators and a huge portion of their users being bots.
We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.
- Comment on SMS/MMS backup and sync? 3 weeks ago:
Welcome to the club :)
- Comment on Messing up my weekend schedule 3 weeks ago:
It’s frustrating enough to make you lay colored eggs 🤔
- Comment on SMS/MMS backup and sync? 3 weeks ago:
If you’re using KDE, look at KDE Connect: community.kde.org/KDEConnect
- Comment on Uncle Sam abruptly turns off funding for CVE program. Yes, that CVE program 3 weeks ago:
fr fr
- Comment on CVE Board members launch the CVE Foundation, a dedicated, non-profit to continue identifying vulnerabilities, after the US ended its contract with Mitre 3 weeks ago:
The CVE system protects everyone that uses computers. It is a public service that forms the core of cybersecurity in the US and many other places. It does not cost the database any more money if people use it to provide services to clients.
Letting a private corporation take it over and put it behind a paywall now means that security, like so many other things, will only be available to people with money. It will make software and hardware more expensive by adding yet another license fee or subscription if you want software that gets security updates.
In addition, a closed database is just less useful. The system works because when one person reports an exploit, every other person then knows about it. That kind of system is much higher quality when more people are able to access it.
The fact that an industry has been created and earns money by providing cybersecurity services shows how useful such a system is for everyone. There are good-paying jobs that depend on this data being freely available. New startups only need to provide service; they don’t need to raise the funds to buy into the security database, because it is a public service. They also pay taxes (a significant amount if they’re charging $30,000 per audit), which is more than enough for the government to operate the database.
- Comment on Lemmy has the ideal number of posts for me. Just enough to have a good time but not too many that I'm scrolling forever 4 weeks ago:
I find that the people I disagree with here make much better points, with significantly fewer radicals, idiots, or crusaders.
Honestly, it gives me hope.
My best experiences online have been as part of smaller communities where you can actually know and recognize other people. I see people commenting on threads and I can remember them talking in a different thread (or multiple threads). So it is much easier to know ‘ok that guy is touchy about this thing but is otherwise a decent person’ and not treat everyone like a 1-dimensional character.
- Comment on Jack Dorsey and Elon Musk would like to ‘delete all IP law’ | TechCrunch 4 weeks ago:
Otherwise I think that the idea of deleting all IP laws is just wishful (and naive) thinking, assuming people would cooperate and build on each other’s inventions/creations.
Given the state the world is currently in, I don’t see that happening soon.
There are plenty of examples of open sharing systems that are functional.
Science, for example. Nobody ‘owns’ the formulas that calculate orbits, or the underlying mathematics that AI models are built on, like transformer networks or convolutional networks. The information is openly shared, given away to everyone who wants it, and it is so powerful that it has completely reshaped society everywhere on Earth (except the Sentinel Islands).
Open Source projects, like Linux, are the foundation of the modern tech world. The ‘IP’ is freely available, and you can copy or modify it as much as you’d like. Linus ‘owns’ the Linux project, but anyone is free to take a copy of the Linux source code, modify it to whatever extent they’d like, and form their own project.
Much of the software and services that people use are built on top of open source tools made by volunteers, for free; and most of the useful knowledge and progress in human society results from breakthroughs made in the sciences, whose discoveries are also free and openly shared.
- Comment on Proton 4 weeks ago:
He tweeted once and is therefore canceled because social media doesn’t understand nuance or context
- Comment on Proton 4 weeks ago:
It’s literally a nothing situation that social media, in its drive to find outrage in every single thing, has blown completely out of proportion.