I definitely think it’s important to make people aware of this difference in the fediverse, especially since that is not how it worked in non-federated social media.
Comment on The fediverse has a bullying problem
Zak@lemmy.world 4 weeks ago
Some people have privacy expectations that are not realistic in an unencrypted, federated, heterogeneous environment run by hobbyist volunteers in their spare time.
If you have something private and sensitive to share with a small audience, make a group chat on Signal. Don’t invite any reporters.
candyman337@sh.itjust.works 4 weeks ago
skullgiver@popplesburger.hilciferous.nl 4 weeks ago
This is exactly why ActivityPub makes for such a mediocre replacement for the big social media apps. You have to let go of any assumptions that at least some of your data remains exclusive to the ad algorithm and accept that everything you post or look at or scroll past is being recorded by malicious servers. Which, in turn, kind of makes it a failure, as replacing traditional social media is exactly what it’s supposed to do.
The Fediverse also lacks tooling to filter out the idiots and assholes. That kind of moderation is a lot easier when you have a centralised database and moderation staff on board, but the network of tiny servers with each their own moderation capabilities will promote the worst behaviour as much as the best behaviour.
But really, the worst part is the UX for apps. Fediverse apps suck at setting expectations. Of course Lemmy publishes which posts you’ve upvoted; that’s essential to how the protocol works. But what other Reddit clone has a public voting history? The same goes for anyone using any form of the word “private” or even “unlisted”, as those only apply in a perfect world where servers have no bugs and there are no malicious servers.
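For context, a Lemmy upvote federates roughly as a public ActivityPub Like activity, and every server it is delivered to can store it forever. A minimal sketch of the shape of that object (the actor and post URLs here are made up for illustration):

```python
# Rough shape of the ActivityPub "Like" that a Lemmy upvote federates as.
# All URLs below are hypothetical examples, not real instances.
upvote = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": "https://example-instance.social/u/alice",    # who voted
    "object": "https://example-instance.social/post/123",  # what was voted on
}

# Nothing in the activity is secret: every federating server that
# receives it learns that this actor liked this post.
print(upvote["actor"], "liked", upvote["object"])
```

That is the whole “public voting history”: the vote has to name the voter so remote servers can tally it.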
TORFdot0@lemmy.world 4 weeks ago
Just because the average user doesn’t consider whether they should trust the platform, doesn’t mean the fediverse is less trustworthy. It’s not. Nothing online should be considered trustworthy if it’s not encrypted.
You still have to consider whether Facebook is trustworthy with your posts and click data, whether the thousands of advertisers they sell your info to are trustworthy, and whether the people you message are trustworthy and won’t get hacked.
Those are about the same risks as trusting a fediverse instance operator, except an instance operator doesn’t have the same motivation to sell your data.
I’m not sure if you are aware of fediblock which allows instance operators to coordinate banning and defederating bad actors from the network. And of course you can always mute or block any user or instance you wish independently of your instance’s block list.
Your data being leaked to “malicious servers” in this case also requires approving a follow request from a user on that instance, or having your profile set to public (and at that point you should expect your content to be public).
I do think you are right that it’s a paradigm shift for new users who aren’t familiar with federation. But anyone who wants to join will have to either give up control to the big platforms and stay put, or shift their thinking.
skullgiver@popplesburger.hilciferous.nl 4 weeks ago
Building trust is hard. It’s easier to trust a few companies than to trust a million unknown servers. It’s why I prefer Wikipedia over amazingnotskgeneratedatalltopicalinformarion.biz when I’m looking up simple facts.
Furthermore, Facebook isn’t selling data directly. At least, not if they’re following the law. They got caught and fined for doing that once, and it’s not their main mode of operation. Like Google, their data is their gold mine; selling it directly would be corporate suicide. They simply provide advertisers with spots to put an ad, and when it comes to data processing, they do all the work before advertisers get a chance to look at a user’s profile.
On the other hand, scraping ActivityPub for advertisers would be trivial. It’d be silly to go through the trouble of setting up something like Threads if all you want is information; a basic AP server that follows every Lemmy community and soaks up gigabytes an hour can be written as a weekend project.
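To illustrate how low the bar is, here’s a hedged sketch (toy data, made-up URLs, not real endpoints) of the core of such a scraper: after fetching a community outbox with an `Accept: application/activity+json` header, all it has to do is walk the returned OrderedCollectionPage:

```python
import json

def extract_object_ids(outbox_page: dict) -> list:
    """Pull object IDs out of an ActivityPub OrderedCollectionPage."""
    ids = []
    for activity in outbox_page.get("orderedItems", []):
        obj = activity.get("object")
        # Objects may be inlined (dict) or referenced by URL (str).
        ids.append(obj["id"] if isinstance(obj, dict) else obj)
    return ids

# Toy page standing in for a real fetched outbox (all URLs hypothetical).
page = json.loads("""{
  "type": "OrderedCollectionPage",
  "orderedItems": [
    {"type": "Create", "object": {"id": "https://example.social/post/1"}},
    {"type": "Announce", "object": "https://example.social/post/2"}
  ]
}""")

print(extract_object_ids(page))  # no authentication required anywhere
```

Point the same loop at every community you can discover and you have the weekend project described above.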
Various Chinese data centers are scraping the hell out of my server, and they carry referer headers from other Fediverse servers. I’ve blocked half of East Asia and new IP addresses keep popping up. Whatever data you think Facebook may be selling, someone else is already selling based on your Fediverse behaviour. Whatever Petal Search and all the others are doing, I don’t believe for a second they’re being honest about it.
Most Fediverse software defaults to federating and accepting inbound follow requests. At least Mastodon, Lemmy, GoToSocial, Kbin, and one of those fish-named Mastodon-likes did. Profiles are often public by default too, so the vulnerability applies to a large section of the Fediverse at default settings.
I’d like to think people would switch to the Fediverse despite the paradigm shift. The privacy risks are still there even when a single company manages everything, so I’d prefer it if people used appropriate tools for sharing private stuff. I think platforms like Circles (a Matrix-based social media system), which leverage encryption to ensure nobody can read things they shouldn’t be able to, are much more appropriate. Perhaps a similar system could be layered on top of ActivityPub as well (after all, every entity already has a public/private key pair).
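On the “every entity already has a key pair” point: ActivityPub servers such as Mastodon publish each actor’s public key in the actor document, where today it is used to verify HTTP signatures on deliveries, not for E2EE. A sketch of reading it, using a made-up actor document (the PEM body is a placeholder):

```python
def get_public_key_pem(actor: dict) -> str:
    """Return the PEM-encoded public key from an ActivityPub actor document."""
    return actor["publicKey"]["publicKeyPem"]

# Hypothetical actor document; real ones are fetched with
# GET /users/alice and an Accept: application/activity+json header.
actor_doc = {
    "id": "https://example.social/users/alice",
    "type": "Person",
    "publicKey": {
        "id": "https://example.social/users/alice#main-key",
        "owner": "https://example.social/users/alice",
        "publicKeyPem": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n",
    },
}

pem = get_public_key_pem(actor_doc)
# A hypothetical E2EE layer could encrypt payloads to this key instead of
# only using it for signature verification, as servers do today.
```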
letzlo@feddit.nl 4 weeks ago
It’s perhaps a communication problem: the privacy settings should clearly state this, or these settings shouldn’t be offered at all. But maybe the current structure is fine for most people?
Regardless, actually honoring those settings is how existing social media used to work. In that sense, federated social media can’t offer a like-for-like alternative, and that could be a problem for some.
arakhis_@feddit.org 4 weeks ago
This poster… it’s like every other social media platform is not anonymous?!
Why should this one be? Did you really think e.g. Reddit wouldn’t corpo-analyze the fork out of your data with data science practices? Anonymous upvotes? LOL
iltg@sh.itjust.works 4 weeks ago
it’s not unrealistic to keep trust at the server level. following your rationale, you can’t trust my reply, or any, because any server could modify the content in transit. or hide posts. or make up posts from actors to make them look bad.
if you assume the network is badly behaved, fedi breaks down. it makes no sense to me that everything is taken for granted, except privacy.
servers will deliver, not modify, not make up stuff, not dos stuff, not spam you, but apparently obviously will leak your content?
fedi models trust at the server level, not the user level. i don’t need to trust you, i just need to trust your server admin, and if i don’t, i defederate
PhilipTheBucket@ponder.cat 3 weeks ago
if you assume the network is badly behaved, fedi breaks down. it makes no sense to me that everything is taken for granted, except privacy.
This is backwards in my opinion.
What you described is exactly how it works. Everything in the network is potentially badly behaved. You need rate limits, digital signatures tying activities back to actors, blocks for particular instances, and so on, specifically because whoever you’re talking to on the network might be badly behaved.
In general, it’s okay in practice to be a little bit loose with it. If you get some spam from a not-yet-blocked instance, or you send a server a message that it fails to deliver because of a bug, that’s okay. But if you’re sending a message that can compromise someone’s privacy if mishandled, you suddenly have to care at a stricter level, because it’s no longer harmless if the server receiving the message is broken (or malicious).
So yes, privacy is different. In practice it’s usually okay to just let users know that nothing they’re sending is really private. Email works that way, Lemmy DMs work that way, it’s okay. But if you start telling people their stuff is really private, and you’re still letting it interact with untrusted servers (which is all of them), you have to suddenly care on this whole other level and do all sorts of E2EE and verification stuff, or else you’re lying to your users. In my opinion.
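As an illustration of that “assume peers misbehave” posture (a generic sketch, not Lemmy’s or Mastodon’s actual code), inbound activities can be throttled per origin instance with a token bucket:

```python
import time

class InstanceRateLimiter:
    """Token bucket per origin domain: burst up to `capacity` activities,
    refilling at `rate` tokens per second."""

    def __init__(self, rate=5.0, capacity=20.0):
        self.rate, self.capacity = rate, capacity
        self.buckets = {}  # domain -> (tokens, last_timestamp)

    def allow(self, domain, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(domain, (self.capacity, now))
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self.buckets[domain] = (tokens, now)
            return False  # over budget: drop or defer the activity
        self.buckets[domain] = (tokens - 1.0, now)
        return True

limiter = InstanceRateLimiter(rate=1.0, capacity=2.0)
# A burst from one (hypothetical) misbehaving instance: the third
# request in the same instant exceeds the bucket and is rejected.
results = [limiter.allow("spam.example", now=100.0) for _ in range(3)]
```

Signature checks and instance blocklists follow the same pattern: a cheap per-message gate applied before anything from the network is trusted.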
iltg@sh.itjust.works 3 weeks ago
taking care of bad servers is instance admin business, you’re conflating the user concerns with the instance owner concerns
generally this thread and previous ones have such bad takes on fedi structure: a federated and decentralized system must delegate responsibility and trust
if you’re concerned about spam, that’s mostly instance owner business. it’s like that with every service: even signal has spam, and signal staff deals with it, not you. you’re delegating trust
if you want privacy, on signal you delegate it to the software. on fedi you delegate it to server owners too, but that’s the only extra trust you need to extend
sending private messages is up to you. if i send a note and address it only to you, i’m delegating trust to you to not leak it, to the software to keep it confidential, and to the server owner to not snoop on it. on signal you still need to trust the software and the recipient
this whole “nothing is private on fedi” is a bad black/white answer to a gray issue. nothing is ever private: how can you trust AES and RSA? do you know every computer passing your packets is safe from side-channel attacks that could break your encryption? you claimed to work in security in another thread, so i would expect you to know the concept of “threat modeling”
Zak@lemmy.world 4 weeks ago
There’s a significant distinction between servers that are actively malicious as you’re describing and servers that aren’t fully compatible with certain features, or that are simply buggy.
Lemmy, for example, modifies posts federated from other platforms to fit its format constraints. One constraint is that a post from Mastodon with multiple images attached will only show one image on Lemmy. Mastodon does it too: inline images from a Lemmy post don’t show in vanilla Mastodon.
I’ll note that Lemmy’s version numbers all start with 0, and so do Pixelfed’s. That implies the software is unfinished and unstable.
RobotToaster@mander.xyz 4 weeks ago
Nothing is private on the fediverse, and Mastodon’s bodge only gives the illusion of privacy. There should be zero expectation that any fediverse software will follow their non-standard extensions.
zedage@lemm.ee 4 weeks ago
I think the confusion about the fediverse’s privacy claims stems from poor enunciation by its proponents. It is definitely more private in terms of passive data mining for ad-tracking purposes compared to for-profit social media. The architecture discourages the people who manage the infrastructure from deploying the ad-tech that builds sophisticated profiles of your behaviour in order to serve you more targeted ads. There’s no monitoring of clicks, click-through rates, time spent on the platform, the type of content you like, etc. The price for that is making public the data that can’t be monetised at large scale, data to which for-profit social media guaranteed “privacy” (in quotes because it was private from prying eyes through E2EE, but not your keys, not your data).
I can see where the confusion might arise for nontechnical people who aren’t familiar with the technical aspects of ActivityPub implementations. But I don’t think there should be any confusion for technical people: given how the decentralisation works, the architecture clearly guarantees a total lack of private data.