Comment on Robot moderation could be coming to your town

auk@slrpnk.net 3 weeks ago

> For example, how do you think the bot would’ve handled the vegan community debacle that happened?

That’s not a situation it’s fully equipped to handle. It can determine what the community’s opinion of someone is, but it can’t make the kind of judgement call involved in deciding whether a post by a permitted user is unexpectedly dangerous misinformation that the admins need to remove. That’s a judgement call humans can’t effectively come to a conclusion on, so the bot certainly won’t do any better.

There is some interesting insight to be had. One of the big concerns that people had about the bot’s premise was that it would shut down minority opinions, with vegans as a perfect example.

I tried going back and having it judge lemmy.world/post/18691022, but there may not be recent activity for a lot of those users, so there’s a risk of false negatives. The only user it wanted to act on was EndlessApollo@lemmy.world, whom it wanted to greylist: they would still be allowed to post, but anything of theirs that collects downvotes would be removed. That sounds right to me, if you look at their modlog.

I also spent some time just now asking it to look at comments from vegantheoryclub.com and recent comments from !vegan@lemmy.world, and it didn’t want to ban or greylist anybody. That’s in keeping with how it’s programmed: almost all users on Lemmy are fine. They have normal participation to counterbalance anything unpopular they like to say, or any single bad day where they get into a big argument.

The point is to pick out the users who only like to pick fights or start trouble and do little else, and there is a significant number of them. You can see some of them in these comments. I think that broader picture of a person’s participation, with leeway for normal human people to get a little out of pocket now and then, is useful context the bot can take into account, and gathering it by hand would be time-prohibitive for human mods making the same decisions.

The literal answer to your question is that I don’t think it would have done anything about the vegan cat food issue other than letting everyone hash it out, and potentially removing some comments from EndlessApollo. But that kind of misinformation-referee position isn’t quite the role I envisioned for it.

> Like you said, it sounds like a good way of decentralizing moderation so that we have fewer problems with power-tripping moderators and more transparent decisions.

I wasn’t thinking in these terms when I made it, but I do think this is a very significant benefit. We’re all human, and it’s hard to be fair and balanced all of the time when you’re given sole authority over who is and isn’t allowed to speak. Initially, I was looking at the bot as its own entity with its own opinions, but I realized that it’s not doing anything more than detecting the will of the community with as much fidelity as I can achieve.

> I just want it so that communities can keep their specific values while easing their moderation burden.

This was a huge concern. Back in the early days of testing and designing the bot’s behaviors, we went back and forth over a large number of specific users and situations to make sure it wasn’t going to do that.

I think the vegan community is a great example. There was one vegan user who was a big edge case in the early days, and they wound up banned, because all they wanted to talk about was veganism, and they kept bringing it up to non-vegans in a pretty unfriendly fashion. I think their username was vegan-related, too. I can’t remember the specifics, but that was the only case where the bot was silencing a vegan person; we hemmed and hawed a little but wound up leaving them banned.
