Comment on Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban
rottingleaf@lemmy.zip 5 months ago
The whole problem with shadowbans is that they are not very easy to prove (without cooperation from Meta). One can be shadowbanned in one area (by geolocation) but not in another, or for some users but not for others. The decisions here can be made based on any kind of data, and frankly Meta has plenty of it to make shadowbans effective and yet hard to prove.
Shadowbans should just be illegal as a thing, first of all; and second, some of the arguments made against him in the article are negligible.
I just don’t get you people hating him more than the two main candidates. It seems that, for you, being a murderer is a lesser problem than being a nutcase.
teft@lemmy.world 5 months ago
Shadowbans help prevent bot activity by preventing a bot from knowing if what they posted was actually posted. Similar to vote obfuscation. It wastes a bot’s time, so it’s a good thing.
UnderpantsWeevil@lemmy.world 5 months ago
Shadowbans help prevent bot activity by preventing a bot from knowing if what they posted was actually posted
I have not seen anything to support the theory that shadowbans reduce the number of bots on a platform. If anything, a sophisticated account run by professional engagement farmers is going to know it’s been shadowbanned - and know how to mitigate the ban - more easily than an amateur publisher producing sincere content. The latter is far more likely to run afoul of a difficult-to-detect ban than the former.
It wastes a bot’s time
A bot has far more time to waste than a human. So this technique is biased against humans, rather than bots.
If you want to discourage bots, put public metrics behind a captcha. That’s far more effective than undermining visibility in a way only a professional would notice.
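As a rough sketch of what that could look like (the verify_captcha helper and the in-memory score store here are stand-ins for a real challenge service and database):

```python
# Toy sketch: engagement numbers are only served after a solved captcha.
scores = {"post-1": 42}  # stand-in for the real datastore

def verify_captcha(token: str) -> bool:
    # Placeholder: a real deployment would call out to a challenge service.
    return token == "solved"

def get_post_score(post_id: str, captcha_token: str | None) -> int | None:
    # No solved challenge, no numbers: a human can pass, a scraper sees nothing.
    if captcha_token is None or not verify_captcha(captcha_token):
        return None
    return scores.get(post_id)

print(get_post_score("post-1", None))      # None - no captcha solved
print(get_post_score("post-1", "solved"))  # 42
```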
Dkarma@lemmy.world 5 months ago
They never said shadow bans reduce the number of bots on a platform. Classic straw man.
rottingleaf@lemmy.zip 5 months ago
It wastes the shadowbanned person’s time, so it’s not.
Similar to vote obfuscation.
Which sucks just as badly.
teft@lemmy.world 5 months ago
Don’t post shit that gets you shadowbanned. Problem solved.
rottingleaf@lemmy.zip 5 months ago
That’s a good solution for you, but some of us don’t generally bend over for assholes.
And it’s not a serious suggestion anyway. You’ll get shadowbanned for whatever somebody with that power wants to shadowban you for. You won’t know the reason or what to avoid.
I got shadowbanned on Reddit a few times for basically repeating the 1988 resolution of the European Parliament on Artsakh (the one in support of reunification with Armenia).
sugar_in_your_tea@sh.itjust.works 5 months ago
So just don’t commit thought crime against Big Brother and you’ll be good?
When a platform gets to a certain size, we need to consider its effects on society as a whole. Hiding undesirable content and promoting desirable content can be a monopolistic practice for the org to get outsized impact on things it finds important. Whether that’s “good” or “bad” depends on how closely that org’s interests are aligned with the average person.
I, for one, do not think Meta’s interests are aligned with my own, so I think it’s bad that they have so much sway that they can steer the public discourse through their ranking algorithm. Shadowbanning is just another way for the platform to get their desired message out.
Instead of trying to restrict yourself to only posting what the platform wants you to post, you should be seeking alternatives that allow you to post what you think is valuable to post.
UnderpantsWeevil@lemmy.world 5 months ago
Truth is treason in the Empire of Lies
kava@lemmy.world 5 months ago
I’ve seen reddit accounts that regularly posted comments for months, all at +1 vote, and never received any response or reply at all because nobody had ever seen their comments. They got hit with some automod shadowban and were yelling into the void, likely wondering why nobody ever felt they deserved to be heard.
I find this unsettling and unethical. I think people have a right to be heard and deceiving people like this feels wrong.
There are other methods to deal with spam that aren’t potentially harmful.
There’s also an entirely different discussion about shadowbans being a way to silence specific forms of speech. Today it may be crazies or hateful speech, but it can easily be any subversive speech should the administration change.
I agree with the other commenter; it probably shouldn’t be allowed.
CaptainSpaceman@lemmy.world 5 months ago
You think he’s better than Biden? Why?
ricdeh@lemmy.world 5 months ago
Because he thinks of him as a murderer?
rottingleaf@lemmy.zip 5 months ago
Because I can gather a pretty believable list of pros and cons for him as a person, which make sense together and didn’t change too sharply. Not the case with Biden.
CaptainSpaceman@lemmy.world 5 months ago
Way to not answer
rottingleaf@lemmy.zip 5 months ago
I mean, you can look at a wall and say it’s a door, it’s your right.
kralk@lemm.ee 5 months ago
Biden, the guy who’s been president for four years? You can’t tell who he is as a person?
rottingleaf@lemmy.zip 5 months ago
This makes my point stronger. You must be very smart if you can characterize this specific man and get some idea of which groups he represents, what his strategy is, and to what end.
hedgehog@ttrpg.network 5 months ago
Why should shadow bans be illegal?
rottingleaf@lemmy.zip 5 months ago
Because a good person would never need those. If you want to have shadowbans on your platform, you are not a good one.
It’s a bit like animal protection: animals can’t have rights balanced by obligations, but you would still want to keep people who are cruel to animals somewhere away from you.
hedgehog@ttrpg.network 5 months ago
Because a good person would never need those. If you want to have shadowbans on your platform, you are not a good one.
This basically reads as “shadow bans are bad and have no redeeming factors,” but you haven’t explained why you think that.
If you’re a real user and you only have one account (or have multiple legitimate accounts) and you get shadow-banned, it’s a terrible experience. Shadow bans should never be used on “real” users even if they break the ToS, and IME, they generally aren’t. That’s because shadow bans solve a different problem.
In content moderation, if a user posts something that’s unacceptable on your platform, generally speaking, you want to remove it as soon as possible. Depending on how bad the content they posted was, or how frequently they post unacceptable content, you will want to take additional measures. For example, if someone posts child pornography, you will most likely ban them and then (as required by law) report all details you have on them and their problematic posts to the authorities.
Where this gets tricky, though, is with bots and multiple accounts.
If someone is making multiple accounts for your site - whether by hand or with bots - and using them to post unacceptable content, how do you stop that?
Your site has a lot of users, and bad actors aren’t limited to only having one account per real person. A single person - let’s call them a “Bot Overlord” - could run thousands of accounts - and it’s even easier for them to do this if those accounts can only be banned with manual intervention. You want to remove any content the Bot Overlord’s bots post and stop them from posting more as soon as you realize what they’re doing. Scaling up your human moderators isn’t reasonable, because the Bot Overlord can easily outscale you - you need an automated solution.
Suppose you build an algorithm that detects bots with incredible accuracy - 0% false positives and an estimated 1% false negatives. Great! Then, you set your system up to automatically ban detected bots.
A couple days later, your algorithm’s accuracy has dropped - from 1% false negatives to 10%. 10 times as many bots are making it past your algorithm. A few days after that, it gets even worse - first 20%, then 30%, then 50%, and eventually 90% of bots are bypassing your detection algorithm.
You can update your algorithm, but the same thing keeps happening. You’re stuck in an eternal game of cat and mouse - and you’re losing.
What gives? Well, you made a huge mistake when you set the system up to ban bots immediately. In your system, as soon as a bot gets banned, the bot creator knows. Since you’re banning every bot you detect as soon as you detect them, this gives the bot creator real-time data. They can basically reverse engineer your unpublished algorithm and then update their bots so as to avoid detection.
One solution to this is ban waves. Those work by detecting bots (or cheaters, in the context of online games) and then holding off on banning them until you can ban them all at once.
Great! Now the Bot Overlord will have much more trouble reverse-engineering your algorithm. They won’t know specifically when a bot was detected, just that it was detected within a certain window - between its creation and ban date.
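To make that concrete, here’s a toy sketch of the ban-wave mechanics (all names and the interval are invented; a real system would persist the queue rather than keep it in memory):

```python
import threading
import time

BAN_WAVE_INTERVAL = 7 * 24 * 3600  # e.g. weekly; a longer window leaks less signal

pending: set[str] = set()  # accounts the detector has flagged, silently
lock = threading.Lock()

def on_detection(account_id: str) -> None:
    with lock:
        pending.add(account_id)  # record the hit, but take no visible action

def ban_wave_loop(ban_account) -> None:
    while True:
        time.sleep(BAN_WAVE_INTERVAL)
        with lock:
            batch = list(pending)
            pending.clear()
        for account_id in batch:
            ban_account(account_id)  # every ban in the window lands at once
```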
But there’s still a problem. You need to minimize the damage the Bot Overlord’s accounts can do between when you detect them and when you ban them.
You could try shortening the time between ban waves. The problem with this approach is that the ban wave approach is more effective the longer that time period is. If you had an hourly ban wave, for example, the Bot Overlord could test a bunch of stuff out and get feedback every hour.
Shadow bans are one natural solution to this problem: you can stop a bot from causing more damage as soon as you detect it. The Bot Overlord can’t quickly tell that an account has been shadow-banned, so their bots keep functioning, giving you more information about the Bot Overlord’s system and letting you refine your algorithm to be even more effective in the future, rather than the other way around.
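Under the hood, the ban itself is little more than a visibility filter. A minimal sketch, assuming an invented schema:

```python
shadowbanned: set[str] = set()  # account ids flagged by the detector

def visible_comments(comments: list[dict], viewer_id: str) -> list[dict]:
    # A shadow-banned author still sees their own comments, so nothing
    # looks wrong from their side; everyone else gets a filtered view.
    return [
        c for c in comments
        if c["author_id"] not in shadowbanned or c["author_id"] == viewer_id
    ]

shadowbanned.add("bot-42")
thread = [
    {"author_id": "bot-42", "text": "spam"},
    {"author_id": "alice", "text": "hi"},
]
print(visible_comments(thread, "bot-42"))  # the bot still sees both comments
print(visible_comments(thread, "alice"))   # alice sees only the legitimate one
```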
I’m not aware of another way to effectively manage this issue. Do you have a counter-proposal?
Out of curiosity, do you have any experience working in content moderation for a major social media company? If so, how did that company balance respecting user privacy with effective content moderation without shadow bans, accounting for the factors I talked about above?
kava@lemmy.world 5 months ago
Nice writeup, but there’s one key piece of information here that’s wrong in the context of reddit.
The “bot overlord” can easily tell if an account is shadowbanned. I use my trusty puppeteer or selenium script to spam my comments. After every comment, I load up the page under a control account (or even just a fresh page with no cookies/cache, maybe even through a VPN if I’m feeling fancy) and check if my comment is there.
Comment not there after a certain threshold of checks? Guess I’m shadowbanned; take the account off the list and add another one of the hundreds I have to the active list.
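The whole check fits in a few lines of Python with Selenium (the URL, selector, and threshold here are made up for illustration; adjust for whatever the site actually renders):

```python
# Rough sketch of the self-check described above.
from selenium import webdriver
from selenium.webdriver.common.by import By

COMMENT_URL = "https://old.reddit.com/r/example/comments/abc123/"  # hypothetical thread
COMMENT_TEXT = "the comment the bot just posted"
MAX_CHECKS = 3  # misses before declaring the account shadowbanned

def comment_visible(driver) -> bool:
    driver.delete_all_cookies()  # fresh anonymous session, like a logged-out visitor
    driver.get(COMMENT_URL)
    bodies = driver.find_elements(By.CSS_SELECTOR, "div.md")  # assumed comment-body selector
    return any(COMMENT_TEXT in b.text for b in bodies)

driver = webdriver.Firefox()
try:
    shadowbanned = not any(comment_visible(driver) for _ in range(MAX_CHECKS))
finally:
    driver.quit()

if shadowbanned:
    print("rotate: retire this account, activate a spare")
```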
The fact is that no matter what you do, there will be bots and spammers. No matter what you do, there will be cheaters in online games and people trying to exploit.
It’s a constant battle, and an impossible one. You still have to come up with solutions, but you always have to balance the costs of those solutions against the benefits.
Shadowbanning on reddit doesn’t solve the problem it aims to fix. It does however have the potential for harm to individuals, especially naive ones who don’t fully understand how websites work.
I don’t think the ends justify the means. Stop and frisk may or may not stop a certain type of crime, but it definitely does damage to specific communities.
rottingleaf@lemmy.zip 5 months ago
“Major social media companies” in my opinion shouldn’t exist. ICQ and old Skype were major enough.
Your post reads like my ex-military uncle’s rants when we talk about censorship, mass repressions, dissenters’ executions, and so on.
These instruments could be used solely against rapists, thieves, murderers, and so on. Usually they are not, because most of us (the neurotypical ones, at least) are apes and want power. That’s why major social media shouldn’t exist.
UnderpantsWeevil@lemmy.world 5 months ago
Shadowbans should just be illegal as a thing
I mean, regional coding makes sense from a language perspective. I don’t really want to see a bunch of foreign language recommendations on my feed, unless I’m explicitly searching for content in that language.
But I do agree there’s a lack of transparency. And I further agree that The Algorithm creates a rarefied collection of “popular” content entirely by way of excluding so much else. The end result is a very generic stream of crap in the main feed and some truly freaky gamed content that’s entirely focused on click-baiting children. Incidentally, jesus fucking christ, whoever is responsible for promoting “unboxing” videos should be beaten to death with a flaming bag of napalm.
None of this is socially desirable or good, but it all appears to be incredibly profitable. It’s a social media environment that’s converged on “Oops! All Ads!” and is steadily making its way to “Oops! All scams!” as the content gets worse and worse and worse.
rottingleaf@lemmy.zip 5 months ago
Yes, thank you for explaining the same thing politely, I had a slight hangover yesterday.
The problem is unneeded people making unneeded decisions for you anonymously (on their side), centrally, and obviously with no transparency.
The advantages of the Internet as it originally came into existence were disadvantages for some people. Trapping people inside social media with a single entry point, with the actual communication happening there, allows the kind of control that the initial architecture was intended to make hard.
UnderpantsWeevil@lemmy.world 5 months ago
The problem is unneeded people making unneeded decisions for you anonymously (on their side), centrally, and obviously with no transparency.
In business, it’s described as a kind of Principal-Agent problem. What happens when the person you’re working with has goals that deviate from what you contracted with them to do?
A classic “unsolved problem” of social relationships.
rottingleaf@lemmy.zip 5 months ago
I agree it’s an unsolved problem, but have you contracted police to, well, police your area? Did Soviet citizens contract the NKVD?
It’s somewhere between the two. In fact, it’s a mechanism imposed on you by power, with a lot of effort spent disguising it as an imperfect market.
FlyingSquid@lemmy.world 5 months ago
I bet you scream about your first amendment rights being violated whenever a moderator deletes your posts.
Buttons@programming.dev 5 months ago
A problem is that social media websites are simultaneously open platforms with Section 230 protections, and also publishers who have free speech rights. Those are contradictory, so which is it?
Perhaps @rottingleaf was speaking morally rather than legally. For example, I might say “I believe everyone in America should have access to healthcare”; if you respond “no, there is no right to healthcare,” you would be right, but you would have missed my point. I was expressing a moral aspiration.
I think shadowbans are a bad mix: censorship that is also hard to detect. Morally, I believe they should be illegal. If a company wants to ban someone, they can be up front about it and ban them; make it clear what you are doing. To implement this legally, we could alter Section 230 protections so that they don’t apply to companies that perform shadowbans.
Dkarma@lemmy.world 5 months ago
They are in no way publishers…ugh you people who don’t know shit about the law are insufferable.
QuadratureSurfer@lemmy.world 5 months ago
Feel free to educate us instead of just saying the equivalent of “you’re wrong and I hate reading comments like yours”.
But I think, in general, the alteration to Section 230 that they are proposing makes sense as a way to keep these companies in check for practices like shadowbanning, especially if those tools are abused for political purposes.
rottingleaf@lemmy.zip 5 months ago
I bet you think this reply was sharp-witted and on point and whatnot.
FlyingSquid@lemmy.world 5 months ago
How much would you like to bet? I accept PayPal.