Comment on Private voting has been added to PieFed
rimu@piefed.social 2 months ago
I can only respond in general terms because you didn't name any specific problems.
Firstly, remember that each PieFed account only has one alt account, and it's always the same alt account doing the votes under the same gibberish user name. If the person is always downvoting, or always voting the same as another person, you'll see those patterns in their alt and the alt can be banned.
Regardless, at any kind of decent scale we're going to have to use code to detect bots and bad actors. Relying on admins to eyeball individual posts' activity and manually compare them isn't going to scale at all, regardless of whether the user names are easy to read or not.
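To make "use code to detect" concrete, a detection heuristic could be as simple as flagging pairs of accounts whose votes agree suspiciously often. This is a rough sketch only; the function names, data shapes, and thresholds are invented for illustration and are not PieFed's actual code:

```python
def vote_agreement(votes_a, votes_b):
    """Fraction of commonly-voted items where two accounts voted the same way.

    votes_* are hypothetical dicts mapping post_id -> +1 or -1;
    PieFed's real vote schema will differ.
    """
    shared = set(votes_a) & set(votes_b)
    if not shared:
        return 0.0
    same = sum(1 for p in shared if votes_a[p] == votes_b[p])
    return same / len(shared)

def flag_suspicious(all_votes, threshold=0.95, min_shared=20):
    """Flag account pairs with near-identical voting over enough shared posts.

    threshold and min_shared are arbitrary illustrative values an admin
    would want to tune against real data.
    """
    flagged = []
    accounts = list(all_votes)
    for i, a in enumerate(accounts):
        for b in accounts[i + 1:]:
            shared = set(all_votes[a]) & set(all_votes[b])
            if len(shared) >= min_shared and \
                    vote_agreement(all_votes[a], all_votes[b]) >= threshold:
                flagged.append((a, b))
    return flagged
```

The same idea extends to other signals (downvote ratio, vote timing), and it works on gibberish alt names just as well as on readable ones, which is the point being made above.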
Max_P@lemmy.max-p.me 2 months ago
That implies trust in the person that operates the instance. It’s not a problem for piefed.social, because we can trust you. It will work for your instance. But can you trust other people’s PieFed instances? It’s open-source, I could just install it on my server, change the code to make me 2-3 alt accounts instead. Pick a random instance from lemmy.world’s instance list, would you blindly trust them to not fudge votes?
The availability of the source code doesn’t help much because you can’t prove that it’s the exact code that’s running with no modifications, and marking people running modified code as suspicious out of the box would be unfair and against open-source culture.
Sure, but you lose some visibility into who the user is. Seeing the comments is useful to get a better grasp of who they are. Maybe they're just a serial fact checker, downvoting misinformation and posting links to reputable sources. It can also help identify whether there's other activity besides just votes; large numbers of votes are less suspicious if you can see the person has also been engaging with comments all day.
And then you circle back to: do you trust the instance admin to investigate or even respond to your messages? How is it gonna go when a big, politically aligned instance is accused of botting and the admin denies the claims but the evidence suggests it's likely? What do we do with Threads, or a hypothetical Twitter going fediverse with Elon still as the boss? Or Truth Social?
The bigger the instance, the easier it is to sneak a few votes in. With millions of user accounts, you can easily borrow a couple hundred of your long-inactive users' accounts, and it's essentially undetectable.
I’m sorry for the pessimism but I’ve come to expect the worst from people. Anything that can be exploited, will be exploited. I also see some deanonymization exploits: people commonly vote+comment, so with some time, you can do correlation attacks and narrow down the accounts. To prevent that, you’d have to remove the 1:1 mapping of users to a gibberish alt, by at least letting the user rotate alts on demand, or rotating them on a schedule, and now we can’t correlate votes to patterns anymore. And everyone’s database endlessly fills up with generated alt accounts (that you can’t delete).
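The correlation attack mentioned above could be as crude as matching an alt's vote timestamps against each candidate's public comment timestamps. A minimal sketch, with invented timestamps and an arbitrary 300-second window (not a real exploit against PieFed):

```python
def correlation_score(vote_times, comment_times, window=300):
    """Count votes landing within `window` seconds of any comment.

    Timestamps are unix seconds; the 300s window is an illustrative
    guess at "voting while actively commenting".
    """
    return sum(
        any(abs(v - c) <= window for c in comment_times)
        for v in vote_times
    )

def likely_owner(alt_vote_times, candidates):
    """Return the candidate whose comment activity best overlaps
    the alt's voting activity. `candidates` maps username -> list
    of comment timestamps (hypothetical data an observer could
    scrape from public comments).
    """
    return max(candidates,
               key=lambda u: correlation_score(alt_vote_times, candidates[u]))
```

A fixed 1:1 user-to-alt mapping means this score only improves as more votes accumulate, which is why rotating alts breaks the attack but also breaks pattern-based moderation.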
The way things are, we don’t have to put any trust in an instance admin. It might as well not be there, it’s just a gateway and file host. But we can independently investigate accounts and ban them individually, without having to resort to banning whole instances, even if the admins are a bit sketchy. Because of the inherent transparency of the protocol.
rimu@piefed.social 2 months ago
Yes. You're going to have to trust someone, eventually.
I'd rather this than have to trust Lemmy admins not to abuse their access to voting data - https://lemm.ee/comment/13768482
ericjmorey@discuss.online 2 months ago
You can even question whether the compiled version running on an instance is the same as the version posted to GitHub. There’s no way to even check what’s running on a server you don’t have access to.
Trust is necessary at some level if you're going to participate in any hosted or federated service, as you pointed out.
Socsa@sh.itjust.works 2 months ago
This is literally already the Lemmy trust model. I can easily just spin up my own instance and send out fake ActivityPub actions to brigade. The method of detecting and resolving this is no different.
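For illustration, a federated vote travels as a JSON "Like" activity that any instance can emit; nothing in the payload itself proves the actor is a real person. The field values below are invented, and the exact schema Lemmy and PieFed use may differ in details:

```python
import json

# Rough shape of an ActivityPub "Like" activity representing an upvote.
# A rogue instance can mint as many of these as it wants; the receiving
# instance only sees that the sending server signed the request.
fake_vote = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://rogue-instance.example/activities/123",
    "type": "Like",
    "actor": "https://rogue-instance.example/u/sockpuppet42",
    "object": "https://piefed.social/post/456",
}
print(json.dumps(fake_vote, indent=2))
```

So the defense is the same with or without private voting: per-account behavioral analysis on the receiving side, not trust in the payload.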