We really need to audit Proton
Comment on Proton’s Lumo AI chatbot: not end-to-end encrypted, not open source
DreamlandLividity@lemmy.world 1 day ago
The worst part is that, once again, Proton is trying to convince its users that it’s more secure than it really is. You have to wonder what else they’re lying or being deceptive about.
ztwhixsemhwldvka@lemmy.world 1 day ago
Mullvad FTW
DreamlandLividity@lemmy.world 1 day ago
Yes, indeed. Even so, just because there is a workaround, we should not ignore the issue (governments descending into fascism).
ztwhixsemhwldvka@lemmy.world 1 day ago
Very true
ordnance_qf_17_pounder@reddthat.com 22 hours ago
MullChad is the best for anyone who doesn’t require port forwarding
sir_pronoun@lemmy.world 1 day ago
Sauce?
DreamlandLividity@lemmy.world 1 day ago
Zero-access encryption
Your chats are stored using our battle-tested zero-access encryption, so even we can’t read them, similar to other Proton services such as Proton Mail, Proton Drive, and Proton Pass.
From Proton’s own website.
And why this is not true is explained in the article from the main post.
loudwhisper@infosec.pub 1 day ago
They actually don’t explain it in the article. The author doesn’t seem to understand why there is a claim of e2e chat history and zero-access for chats. The point of zero-access is trust. You need to trust the provider to do it, because it’s not cryptographically verifiable. Upstream there is no encryption; zero-access means providing the service (usually unencrypted), then encrypting and discarding the plaintext.
Of course the model needs access to the context in plaintext, exactly like Proton has access to emails sent from non-PGP addresses. What they can do is encrypt the chat histories, because those don’t need active processing, and encrypt on the fly the communication between the model (which needs plaintext access) and the client. The same happens with Scribe.
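If it helps, the zero-access flow is roughly the following - a minimal sketch using PyNaCl sealed boxes as a stand-in for Proton’s actual PGP machinery (names and messages are made up):

```python
# Rough sketch of "zero-access" storage, with PyNaCl sealed boxes standing in
# for Proton's real PGP setup. The point: the server briefly has plaintext,
# then keeps only ciphertext it cannot open.
from nacl.public import PrivateKey, SealedBox

# The user's key pair; the server only ever holds the public half.
user_key = PrivateKey.generate()

# Server side: the model's reply (or an inbound plaintext email) arrives unencrypted...
plaintext_reply = b"response the model just generated"

# ...gets encrypted to the user's public key, and the plaintext is discarded.
stored_blob = SealedBox(user_key.public_key).encrypt(plaintext_reply)
del plaintext_reply  # nothing readable stays at rest on the server

# Client side: only the private key can open the stored chat history.
print(SealedBox(user_key).decrypt(stored_blob))
```

Nothing in that flow is verifiable from the outside, which is exactly why it comes down to trusting the provider.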
I personally can’t stand LLMs and am eagerly waiting for this bubble to collapse, but this article is essentially a nothingburger.
DreamlandLividity@lemmy.world 1 day ago
You understand that, but try to read it from the point of view of an average user who knows next to nothing about cybersecurity and LLMs. It sounds like the e2ee that Mail and Drive are famous for. To us that’s obviously impossible, but most people will interpret the marketing that way.
jjlinux@lemmy.zip 1 day ago
This I can agree on. They would have been better served, and made things clearer to their users, by stating plainly that it is not “zero trust” and not e2ee. At the end of the day, once the masses start trusting a company they stop digging deep; they read the first couple of paragraphs of the details, if that. Some of us keep digging to find the weakest links in our security and privacy and try to strengthen them. So yeah, pretty stupid of them.
hansolo@lemmy.today 1 day ago
Both you and the author seem to not understand how LLMs work. At all.
At some point, yes, an LLM has to process cleartext tokens. There’s no getting around that. Anyone who creates a 30-billion-parameter LLM that can run on encrypted data will become an overnight billionaire from military contracts alone. If you want absolute privacy, process locally. Lumo has limitations, but it goes further than duck.ai at respecting privacy. Your threat model and your equipment mean YOU make a decision for YOUR needs. This is an option. It is not trying to be one-size-fits-all. You don’t HAVE to use it. It’s not being forced down your throat like Gemini or Copilot.
And their LLM? It’s Mistral, OpenHands, and OLMo, all open source. It’s in their documentation. So the article straight-up lies about that. Like… did Google write this article? It’s simply propaganda.
Also, Proton does have some circumstances where it lets you decrypt your own email locally. Otherwise it’s basically impossible to search your email for text in the email body. They already had that as an option, and if users want AI assistants, that’s obviously their bridge. But it’s not a default setup. It’s an option you have to set up. It’s not for everyone. Some users want that. It’s not forced on everyone. Chill TF out.
DreamlandLividity@lemmy.world 1 day ago
Their AI is not local, so adding it to your email means breaking e2ee. That’s to some extent fine. You can make an informed decision about it.
But Proton is not putting warning labels on this. They are trying to confuse people into thinking it offers the same security as their e2ee mail. Just look at the “zero trust” bullshit on Proton’s own page.
jjlinux@lemmy.zip 1 day ago
It does not say “zero-trust” anywhere; it says “zero-access”. The data is encrypted at rest, which is not the same as e2ee. They never mention end-to-end encryption for Lumo.
Which means that they are not advertising anything they are not doing or cannot do.
By posting this disinformation, all you’re achieving is pushing people back to all the shit “free” services out there, because many will start believing that privacy is way harder than it actually is (“so what’s the point?”) or, even worse, that no alternative will help them be more private, so they might as well stop trying.
hansolo@lemmy.today 1 day ago
My friend, I think the confusion stems from you thinking you have a deep technical understanding of this, when everything you say demonstrates that you don’t.
First off, you don’t even know the terminology. A local LLM is one YOU run on YOUR machine.
Lumo apparently runs on Proton’s servers - where their email and docs all live as well. So I’m not sure what “Their AI is not local!” even means, other than that you don’t know what LLMs do or what they actually are. Do you expect a 32B LLM that would use about a 32GB video card to all get downloaded and run in a browser? Buddy… just… no.
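For reference, “local” looks roughly like this - a sketch using llama-cpp-python with a quantized model file you’ve already downloaded (the file name is made up); the whole thing runs on your own hardware and nothing leaves your machine:

```python
# Rough sketch of running a model locally with llama-cpp-python.
# The model path is a placeholder; you need the GGUF file (and the RAM/VRAM)
# on your own machine. Nothing here touches a remote server.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload everything to the GPU if you have the VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize zero-access encryption in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```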
Look, Proton can at any time MITM attack your email, or, if you use them as a VPN, MITM your VPN traffic if it feels like it. Any VPN or secure email provider can do that. Mullvad can, Nord can, take your pick. That’s just a fact. Google’s business model is to MITM attack your life, so we already have the counterfactual. So your threat model needs to include how much you trust the entity handling your data not to do that, whether intentionally or by letting others in through negligence.
There is no such thing as an e2ee LLM. That’s not how any of this works. Encrypting the chats so that what you type reaches the LLM context window, letting the LLM process tokens the only way it can, getting your response back, and having it keep no logs or data is about as good as it gets short of a local LLM - which, remember, means on YOUR machine. If that’s unacceptable for you, then don’t use it. But don’t brandish your ignorance like you’re some expert, as if everyone on earth needs to adhere to whatever ill-informed “standards” you think up.
Also, clearly you aren’t using Proton anyway because if you need to search the text of your emails, you have to process that locally, and you have to click through 2 separate warnings that tell you in all bold text “This breaks the e2ee! Are you REALLY sure you want to do this?” So your complaint about warnings is just a flag saying you don’t actually know and are just guessing.
DreamlandLividity@lemmy.world 1 day ago
Yes, that is exactly what I am saying. You seem to be confused by basic English.
They are not supposed to be able to, and well-designed e2ee services can’t.
I know, yet Proton is happily advertising one. Just read their page.
loudwhisper@infosec.pub 1 day ago
Scribe can be local, if that’s what you are referring to.
They also have a specific section on it at proton.me/…/proton-scribe-writing-assistant#local…
Also, emails for the most part are not e2ee; they can’t be, because the other party is not using encryption. They use “zero-access”, which is different. It means Proton gets the email in clear text, encrypts it with your public PGP key, deletes the original, and sends it to you.
See proton.me/…/proton-mail-encryption-explained
wewbull@feddit.uk 1 day ago
If an AI can work on encrypted data, it’s not encrypted.
hansolo@lemmy.today 1 day ago
SMH
No one is saying it’s encrypted when processed, because that’s not a thing that exists.
wewbull@feddit.uk 15 hours ago
End-to-end encryption of an interaction with a chatbot would mean the company doesn’t decrypt your messages to it, operates on the encrypted text, gets an encrypted response which only you can decrypt, and sends it to you. You then decrypt the response.
So yes. It would require operating on encrypted data.
BB84@mander.xyz 18 hours ago
homomorphic encryption?
not there yet, of course, but it is conceptually possible
@wewbull@feddit.uk
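For the curious, “operating on encrypted data” in the homomorphic sense looks something like this toy sketch with the python-paillier (phe) library - additive operations only, nowhere near what a transformer would need, but it shows the concept is real:

```python
# Tiny illustration of homomorphic encryption with python-paillier (pip install phe).
# The "server" computes on numbers it can never read; only the key holder
# can decrypt the result. Far from an LLM, but the concept exists.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Client side: encrypt the inputs before sending them anywhere.
enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)

# "Server" side: compute on ciphertexts without ever seeing 17 or 25.
enc_sum = enc_a + enc_b     # homomorphic addition
enc_scaled = enc_sum * 3    # multiplication by a plaintext constant

# Back on the client: decrypt the results.
print(private_key.decrypt(enc_sum))     # 42
print(private_key.decrypt(enc_scaled))  # 126
```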