I definitely think it’s important to make people aware of the difference in the fediverse, especially since that is not how it worked in non-federated social media.
Comment on The fediverse has a bullying problem
Zak@lemmy.world 1 month ago
Some people have privacy expectations that are not realistic in an unencrypted, federated, heterogeneous environment run by hobbyist volunteers in their spare time.
If you have something private and sensitive to share with a small audience, make a group chat on Signal. Don’t invite any reporters.
candyman337@sh.itjust.works 1 month ago
skullgiver@popplesburger.hilciferous.nl 1 month ago
[deleted]
TORFdot0@lemmy.world 1 month ago
Just because the average user doesn’t consider whether they should trust the platform doesn’t mean the fediverse is less trustworthy. It’s not. Nothing online should be considered trustworthy if it’s not encrypted.
You still have to consider whether Facebook is trustworthy with your posts and click data, whether the thousands of advertisers they sell your info to are trustworthy, and whether the people you message are trustworthy and won’t get hacked.
Those are about the same risks as trusting a fediverse instance operator, except the operator doesn’t have the same motivation to sell your data.
I’m not sure if you are aware of fediblock, which allows instance operators to coordinate banning and defederating bad actors from the network. And of course you can always mute or block any user or instance you wish, independently of your instance’s block list.
Your data being leaked to “malicious servers” in this case also requires approving a follow request from a user on that instance, or having your profile set to public (and at that point you should expect your content to be public).
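To make that concrete, here is a rough sketch of those two layers of filtering; the names and structure are invented for illustration and this is not Lemmy’s or Mastodon’s actual code:

```python
# Hypothetical sketch: admin-level defederation plus user-level blocks/mutes.
# Not taken from any real fediverse server.
from urllib.parse import urlparse

INSTANCE_BLOCKLIST = {"spam.example", "malicious.example"}  # admin-maintained

def domain_of(actor_uri: str) -> str:
    """Extract the host from an actor URI like https://lemmy.world/u/alice."""
    return urlparse(actor_uri).netloc.lower()

def should_deliver(actor_uri: str, user_blocked_actors: set[str],
                   user_muted_domains: set[str]) -> bool:
    """Return True if an activity from actor_uri should reach this user."""
    domain = domain_of(actor_uri)
    if domain in INSTANCE_BLOCKLIST:        # defederated by the admin
        return False
    if domain in user_muted_domains:        # muted by this individual user
        return False
    if actor_uri in user_blocked_actors:    # blocked by this individual user
        return False
    return True

# Example: the admin blocklist applies to everyone, personal blocks only to you.
print(should_deliver("https://spam.example/u/bot", set(), set()))        # False
print(should_deliver("https://lemmy.world/u/alice",
                     {"https://lemmy.world/u/troll"},
                     {"annoying.example"}))                              # True
```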
I do think you are right that it’s a paradigm shift in thinking for new users who aren’t familiar with federation. But I think anyone who wants to join will either have to stay put and give up control to the big platforms, or shift their thinking.
letzlo@feddit.nl 1 month ago
It’s perhaps a communication problem: the privacy settings should clearly state this, or these settings shouldn’t be offered at all. But maybe the current structure is fine for most people?
Regardless, it’s how existing social media used to work. In that sense, federated social media can’t offer an alternative, and that could be a problem for some.
arakhis_@feddit.org 1 month ago
This poster… it’s not like every other social media platform is anonymous?!
Why should this one be? Did you really think e.g. Reddit wouldn’t corpo-analyze the fork out of your data with data-science practices? Anonymous upvotes? LOL
iltg@sh.itjust.works 1 month ago
it’s not unrealistic to keep trust at the server level. following your rationale, you can’t trust my reply, or any, because any server could modify the content in transit. or hide posts. or make up posts from actors to make them look bad.
if you assume the network is badly behaved, fedi breaks down. it makes no sense to me that everything is taken for granted, except privacy.
servers will deliver, not modify, not make up stuff, not DoS stuff, not spam you, but apparently obviously will leak your content?
fedi models trust at the server level, not the user level. i don’t need to trust you, i need to trust just your server admin, and if i don’t, i defederate
PhilipTheBucket@ponder.cat 1 month ago
if you assume the network is badly behaved, fedi breaks down. it makes no sense to me that everything is taken for granted, except privacy.
This is backwards in my opinion.
What you described is exactly how it works. Everything in the network is potentially badly behaved. You need to put in place rate limits, digital signatures tying activities back to their actors, blocks for particular instances, and so on, specifically because whenever you’re talking with someone else on the network, they might be badly behaved.
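As a very rough sketch of what that defensive posture can look like at the inbox level (purely illustrative, not code from any real fediverse server; the signature check is just a flag here, whereas real implementations verify the HTTP Signature header against the public key published in the sending actor’s profile document):

```python
# Illustrative inbox gatekeeping: block known-bad instances, rate limit the
# rest, and refuse anything that can't be tied back to its actor.
import time
from collections import defaultdict, deque

BLOCKED_INSTANCES = {"malicious.example"}
RATE_LIMIT = 100          # max activities per host per window
WINDOW_SECONDS = 60
_recent = defaultdict(deque)   # host -> timestamps of recently seen activities

def within_rate_limit(host: str) -> bool:
    now = time.time()
    q = _recent[host]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                       # drop timestamps outside the window
    if len(q) >= RATE_LIMIT:
        return False
    q.append(now)
    return True

def accept_activity(host: str, signature_valid: bool) -> bool:
    """Decide whether an incoming activity gets processed at all."""
    if host in BLOCKED_INSTANCES:
        return False                      # defederated outright
    if not within_rate_limit(host):
        return False                      # treat floods as hostile
    if not signature_valid:
        return False                      # can't tie the activity to its actor
    return True
```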
In general, it’s okay in practice to be a little bit loose with it. If you get some spam from a not-yet-blocked instance, or you send a message to a server that has a bug and fails to deliver it, that’s okay. But if you’re sending a message which can compromise someone’s privacy if mishandled, then all of a sudden you have to care on a stricter level, because it’s not harmless anymore if the server receiving the message is broken (or malicious).
So yes, privacy is different. In practice it’s usually okay to just let users know that nothing they’re sending is really private. Email works that way, Lemmy DMs work that way, it’s okay. But if you start telling people their stuff is really private, and you’re still letting it interact with untrusted servers (which is all of them), you have to suddenly care on this whole other level and do all sorts of E2EE and verification stuff, or else you’re lying to your users. In my opinion.
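For contrast, here is a minimal sketch of what end-to-end encryption buys you, using PyNaCl. This is nothing like Signal’s actual protocol, which adds the Double Ratchet and identity verification on top; it only illustrates that a server relaying nothing but ciphertext has nothing meaningful to leak:

```python
# Minimal E2EE sketch with PyNaCl (pip install pynacl). Not Signal's protocol;
# it only shows that a relay handling ciphertext alone can't read the message.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob. A server in the middle only ever sees `ciphertext`.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Only Bob, holding his private key, can recover the plaintext.
assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"meet at noon"
```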
iltg@sh.itjust.works 1 month ago
taking care of bad servers is instance admin business; you’re conflating user concerns with instance owner concerns
generally this thread and previous ones have such bad takes on fedi structure: a federated and decentralized system must delegate responsibility and trust
if you’re concerned about spam, that’s mostly instance owner business. it’s like that with every service: even signal has spam, and signal staff deals with it, not you. you’re delegating trust
if you want privacy, on signal you need to delegate privacy to the software. on fedi you delegate to server owners too, but that’s the only extra trust you need to extend
sending private messages is up to you. if i send a note and address it only to you, i’m delegating trust to you to not leak it, to the software to keep it confidential, and to the server owner to not snoop on it. on signal you still need to trust the software and the recipient
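concretely, that addressing lives in the activity itself; a note aimed at a single actor versus one aimed at the Public collection looks roughly like this (illustrative python dicts; the field names are standard ActivityStreams vocabulary, the actor URLs are just examples):

```python
# Illustrative ActivityStreams addressing (vocabulary is real, URLs are examples).
# Whether a directly addressed note stays confidential still depends on the
# recipient's server behaving well; the protocol itself doesn't encrypt anything.
direct_note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://sh.itjust.works/u/iltg",
    "to": ["https://lemmy.world/u/Zak"],  # one recipient, no public addressing
    "content": "only you, my server, and your server can read this",
}

public_note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://sh.itjust.works/u/iltg",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],  # anyone may see it
    "cc": ["https://sh.itjust.works/u/iltg/followers"],
    "content": "expect this to be copied everywhere",
}
```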
this whole “nothing is private on fedi” is a bad black/white answer to a gray issue. nothing is private ever: how can you trust AES and RSA? do you know every computer passing your packet is safe from side-channel attacks that could break your encryption? you claimed to work in security in another thread, i would expect you to know the concept of “threat modeling”
Zak@lemmy.world 1 month ago
There’s a significant distinction between servers that are actively malicious as you’re describing and servers that aren’t fully compatible with certain features, or that are simply buggy.
Lemmy, for example, modifies posts federated from other platforms to fit its format constraints. One of those constraints is that a post from Mastodon with multiple images attached will only show one image on Lemmy. Mastodon does it too: inline images from a Lemmy post don’t show on vanilla Mastodon.
I’ll note that Lemmy’s version numbers all start with 0. So do Pixelfed’s. That implies the software is unfinished and unstable.
RobotToaster@mander.xyz 1 month ago
Nothing is private on the fediverse, and Mastodon’s bodge only gives the illusion of privacy. There should be zero expectation that any fediverse software will follow their non-standard extensions.
zedage@lemm.ee 1 month ago
I think the confusion around the fediverse’s claims of privacy stems from poor articulation by its proponents. It is definitely more private in terms of passive data mining for ad-tracking purposes compared to for-profit social media. The architecture is designed to discourage instance operators, the people who manage the infrastructure, from implementing ad-tech that builds sophisticated profiles of your behaviour in order to serve you more targeted ads. There’s no monitoring of clicks, click-through rates, time spent on the platform, the type of content you like, etc. And the price for that is making public the kind of data that can’t be monetised at large scale, data to which for-profit social media guaranteed “privacy” (in quotes because it was private from prying eyes through E2EE, but not your keys, not your data).
I can see where the confusion might arise for nontechnical people who aren’t familiar with how ActivityPub implementations work. For technical people there shouldn’t be any confusion: given how decentralisation works, the architecture clearly guarantees a total absence of data privacy.