[deleted]
Submitted 4 weeks ago by dont@lemmy.world to selfhosted@lemmy.world
Comments
solrize@lemmy.ml 4 weeks ago
Some people use TPM hardware for this. Others (big cloud operators) have a ton of complicated infrastructure involving, among other things, special key servers on a walled-off LAN. I don’t know a really satisfying answer, so I’ll keep watching the thread.
utjebe@reddthat.com 4 weeks ago
I was sorting out something similar some time ago with www.dwarmstrong.org/remote-unlock-dropbear/
Also there is github.com/latchset/tang and github.com/latchset/clevis
Then I changed it so my server boots and offers basic functionality like DNS, and any encrypted data waits until I unlock it. It can be annoying when I fiddle with it, but otherwise it works very well, considering I need to unlock it just a few times a year.
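The “wait until I unlock it” part can be as simple as this (minimal Python sketch, untested; the mountpoint is made up):

```python
import os
import time

DATA_MOUNT = "/srv/encrypted"  # hypothetical mountpoint of the LUKS volume

def wait_for_mount(path, poll_seconds=5):
    # Block until the encrypted volume has been unlocked and mounted.
    while not os.path.ismount(path):
        time.sleep(poll_seconds)

wait_for_mount(DATA_MOUNT)
# ...only now start whatever needs the encrypted data...
```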
dont@lemmy.world 4 weeks ago
The annoyance grows with the number of hosts ;-) I still want to feel in control, which is why I’m hesitant to implement unattended decryption like with tang/clevis.
But I’m interested in the idea of not messing with the initrd image: booting into a running system and then waiting for decryption of a data partition. Isn’t it a hassle to manually override all the relevant service declarations etc. to wait for the mount? Or how do you do that?
KaninchenSpeed@lemmy.blahaj.zone 4 weeks ago
If you already have/can run a local server, then maybe store the LUKS passphrase there and run a script on it which SSHes into the remote server and enters the stored passphrase on command. Maybe a simple HTTP server triggers it, which you could auth using forward auth on your reverse proxy, so you wouldn’t need to implement auth in your script.
Of course the passphrase is stored in plain text, but that will be the case in any setup not using a TPM.
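Very roughly, something like this (untested sketch; host, port, paths and the route are made up, and auth is assumed to be handled entirely by the reverse proxy):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import paramiko  # pip install paramiko

HOST = "remote.example"                     # hypothetical host waiting in its initramfs
PASSPHRASE_FILE = "/etc/unlock/passphrase"  # plain text, as noted above
SSH_KEY = "/etc/unlock/id_ed25519"

def unlock(host):
    with open(PASSPHRASE_FILE) as f:
        passphrase = f.read().strip()
    client = paramiko.SSHClient()
    # For a real setup, pin the initramfs host key instead of auto-accepting.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, port=2222, username="root", key_filename=SSH_KEY)
    # dropbear-initramfs setups expose cryptroot-unlock, which reads the
    # passphrase from stdin.
    stdin, stdout, stderr = client.exec_command("cryptroot-unlock")
    stdin.write(passphrase + "\n")
    stdin.flush()
    stdout.channel.recv_exit_status()  # wait until the unlock attempt finishes
    client.close()

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The reverse proxy in front of this does the forward-auth check.
        if self.path == "/unlock":
            unlock(HOST)
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```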
dont@lemmy.world 4 weeks ago
The passphrase should be stored and transferred encrypted, but that would basically mean reimplementing mandos, a tool that was mentioned in another reply https://lemmy.world/post/38400013/20341900. Besides that, yes, that’s one way I’ve also considered. An Ansible playbook with access to all encrypted hosts’ initrd SSH keys that tries to log in; if a host is waiting for decryption, it provides the key, done. Needs one webhook for notification and one for me to trigger the playbook run… Maybe I will revisit this…
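The loop itself would be simple enough; in Python rather than Ansible it might look like this (untested sketch; hosts, paths and the dropbear port are placeholders, and the passphrases are kept encrypted at rest with a symmetric key):

```python
from cryptography.fernet import Fernet  # pip install cryptography
import socket

HOSTS = ["alpha.example", "beta.example"]  # hypothetical encrypted hosts
KEY_FILE = "/root/unlock/fernet.key"       # symmetric key, kept off the hosts

def waiting_for_unlock(host, port=2222, timeout=3):
    # A host stuck at the LUKS prompt answers on its dropbear initramfs port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def passphrase_for(host):
    # One Fernet-encrypted passphrase file per host.
    with open(KEY_FILE, "rb") as f:
        fernet = Fernet(f.read())
    with open(f"/root/unlock/{host}.enc", "rb") as f:
        return fernet.decrypt(f.read()).decode()

for host in HOSTS:
    if waiting_for_unlock(host):
        passphrase = passphrase_for(host)
        # ...feed it to cryptroot-unlock over SSH, as in the sketch above...
```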
emrsmsrli@lemmy.world 4 weeks ago
Brkdncr@lemmy.world 4 weeks ago
Isn’t KMIP how this is solved?
dont@lemmy.world 4 weeks ago
Sort of, but this seems a bit heavy. (That being said, I was also considering PKCS#11 on a network HSM, which seems to do basically the same…)
thelittleblackbird@lemmy.world 4 weeks ago
Check here, I think this is a more sensible way of doing things: www.recompile.se/mandos
dont@lemmy.world 4 weeks ago
Interesting, do you happen to know how this “approval” works here, concretely?
thelittleblackbird@lemmy.world 4 weeks ago
I am afraid I don’t get the question.
What exactly do you mean?
Eknz@lemmy.eknz.org 4 weeks ago
Ironically, the passphrase for the encryption wouldn’t be encrypted in this scenario, as claims can be decoded from the token payload if intercepted. It would probably be stored as-is server-side as well. Claims aren’t designed as secrets.
Perhaps you could authorise a request to an actual secrets manager via oidc though, allowing the volume to be unlocked.
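For illustration, that flow could look like this with HashiCorp Vault’s JWT auth and KV v2 API (untested sketch; the address, role and secret path are invented):

```python
import requests  # pip install requests

VAULT = "https://vault.example:8200"  # hypothetical Vault instance

def fetch_luks_passphrase(oidc_jwt):
    # Exchange the OIDC token for a Vault token; only the authorisation
    # travels in the JWT, never the passphrase itself.
    r = requests.post(f"{VAULT}/v1/auth/jwt/login",
                      json={"role": "unlocker", "jwt": oidc_jwt})
    r.raise_for_status()
    token = r.json()["auth"]["client_token"]
    # Read the passphrase from a KV v2 secret the role is allowed to see.
    r = requests.get(f"{VAULT}/v1/secret/data/luks/server1",
                     headers={"X-Vault-Token": token})
    r.raise_for_status()
    return r.json()["data"]["data"]["passphrase"]
```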
dont@lemmy.world 4 weeks ago
Yes, I was thinking about storing encrypted keys, but still, using claims is clearly just wrong… Using a vault to store the key is probably the way to go, even though it adds another service the setup depends on.
Eknz@lemmy.eknz.org 4 weeks ago
A fall-back to the current way of unlocking the drive would probably be a good idea. It wouldn’t be fun to lose access to something because a cloud service went down or access to it was lost, etc.