Comment on Proxmox: Make CT Fuse Mount Available to Host
mlfh@lemmy.sdf.org 3 days ago
The rclone FUSE mount lives entirely inside the container's own mount namespace, so it never propagates back into the filesystem the host presents into that container.
Since rclone is available in the Debian repos, the simplest option is to do the rclone mount on the host and then bind-mount it into the Plex container.
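As a rough sketch of that host-side approach (remote name, mount path, and CT ID here are placeholders — swap in your own):

```shell
# On the Proxmox host: mount the rclone remote.
# --allow-other lets the container's mapped uids read it
# (requires user_allow_other in /etc/fuse.conf).
rclone mount mycloud: /mnt/media --allow-other --daemon

# Bind-mount the host path into the Plex container
# (CT 101 here), visible as /media inside the CT.
pct set 101 -mp0 /mnt/media,mp=/media
```

For unprivileged CTs you may also need to sort out uid/gid mapping so Plex can read the files.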
If you want to keep the rclone mounting containerized, though (or if your Proxmox host is clustered and you want a host-side mount shared between your nodes), you can use rclone's built-in but experimental NFS server: rclone.org/commands/rclone_serve_nfs/
Make sure your two containers can talk to each other over a secure network ("this server does not implement any authentication so any client will be able to access the data"), start the NFS server in the rclone container, and mount it via NFS in the Plex container.
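Roughly, that looks like this (remote name, IP, and mount path are placeholders; the port/mountport options follow the `rclone serve nfs` docs, since it's a userspace NFS server):

```shell
# In the rclone container: serve the remote over NFS.
# Bind only to a trusted internal address — there is no auth.
rclone serve nfs mycloud: --addr 192.168.10.5:2049

# In the Plex container: mount it (needs nfs-common installed).
mount -t nfs -o port=2049,mountport=2049,tcp \
  192.168.10.5:/ /mnt/media
```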
Good luck!
modeh@piefed.social 2 days ago
That explains quite a lot, thank you for elaborating on it.
I am trying to keep the host as minimal as possible, that’s why I’m avoiding doing the mount directly on it and instead containerizing everything.
I will give the rclone NFS approach a shot, it’s definitely a worthwhile option.
erock@lemmy.ml 2 days ago
I went down a similar path as you. The entire Proxmox community argues for keeping the host an appliance with nothing extra installed. But the second you need to share data — like a NAS — the tooling is a huge pain. I couldn't find a solution that felt right.
So my solution was to make my NAS a ZFS pool on the host. Bind mounting works for CTs but not VMs, which is an annoying feature asymmetry, so I also installed an NFS server on the host to expose the pool.
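For anyone curious, a minimal sketch of that setup — pool name, device, and subnet are made up, adjust to taste:

```shell
# On the Proxmox host: create the pool and a dataset for media.
zpool create tank /dev/sdb
zfs create tank/media

# Install the kernel NFS server and let ZFS manage the export.
apt install -y nfs-kernel-server
zfs set sharenfs="rw=@192.168.10.0/24" tank/media

# VMs (and CTs) can then mount it over the network:
mount -t nfs 192.168.10.1:/tank/media /mnt/media
```

CTs could still use plain bind mounts to the dataset path instead of going through NFS.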
I know that’s not what you want but just wanted to share what I did.
The feature asymmetry between CTs and VMs basically pushed CTs out of my orchestration entirely.