koala
@koala@programming.dev
- Comment on 42 minutes ago:
Thanks! I was not aware of these options, along with what the other poster mentioned about `--link-dest`. These do turn rsync into a backup program, which is something the root article should explain! (Both are more limited in some aspects than other backup software, but they might still be a simpler yet effective solution. And sometimes simple is best!)
- Comment on 44 minutes ago:
Ah, I didn’t know of this. This should be in the linked article, because it’s one of the ways to turn rsync into a real backup! (I didn’t know this flag; I thought this was the main point of rdiff-backup.)
- Comment on 53 minutes ago:
Beware rdiff-backup. It certainly does turn rsync (not a backup program) into a backup program.
However, I used rdiff-backup in the past and it can be a bit problematic. If I remember correctly, every “snapshot” you keep in rdiff-backup uses as many inodes as the thing you are backing up. (Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.
But it does make rsync a backup solution; a snapshot or a redundant copy is very useful, but it’s not a backup.
(OTOH, rsync is still wonderful for large transfers.)
- Comment on Mail Backup/Alternative server for access? 2 days ago:
I run mbsync/isync to keep a maildir copy of my email (hosted by someone else).
You can run it periodically with cron or systemd timers; it connects to an IMAP server and downloads all emails to a directory (in maildir format) for backup. You can also use this to migrate to another IMAP server.
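A minimal sketch of what that setup looks like (account name, hostname, and paths are made up, and option names vary a bit between isync versions; check the mbsync man page for what your provider needs):

```
# ~/.mbsyncrc (hypothetical)
IMAPAccount example
Host imap.example.com
User me@example.com
PassCmd "pass show mail/example"
TLSType IMAPS

IMAPStore example-remote
Account example

MaildirStore example-local
Path ~/mail/example/
Inbox ~/mail/example/INBOX
SubFolders Verbatim

Channel example
Far :example-remote:
Near :example-local:
Patterns *
Create Near
SyncState *
```

Then a crontab entry like `0 * * * * mbsync -a` pulls everything down hourly.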
If the webmail sucks, I wouldn’t run my own. I would consider using Thunderbird instead. It is a desktop/Android application that syncs mail to your desktop/phone, so most of the time it’s working with local storage and is much faster than most webmails.
- Comment on Those who don't use dashboards, how are you managing your services? 2 days ago:
charity.wtf/…/notes-on-the-perfidy-of-dashboards/
Graphs and stuff might be useful for doing capacity planning or observing some trends, but most likely you don’t need either.
If you want to know when something is down (and you might not need to know), set up alerts. (And do it well, you should only receive “actionable” alerts. And after setting alerts, you should work on reducing how many actionable things you have to do.)
(I did set up Nagios to send metrics to ClickHouse, plotted with Grafana. But mostly because I wanted to learn a few things, and I was curious about network latencies and wanted to plan storage a bit longer term. But I could live perfectly well without those.)
- Comment on Video and screen sharing server suggestions 1 week ago:
Not sure about how it handles video, but I’ve been meaning to take a look at getbananas.net
- Comment on What do you think is the best (and cheapest) way to host a new nextcloud instance and website for my local scouts organisation? 2 weeks ago:
How much storage do you want? Do you want any specific feature beyond file sharing?
How much experience do you have self-hosting stuff? What is the purpose of this project? (E.g. do you want a learning experience, to avoid commercial services, or do you just need file sharing?)
- Comment on What is the easiest way to have a self hosted git server? 2 weeks ago:
To be fair, if you want to sync your work across two machines, Git is not ideal because, well, you must always remember to push. If you don’t push before switching to the other machine, you’re out of luck.
Syncthing has no such problem, because it’s real time.
However, it’s true that you cannot combine Syncthing and Git. There are solutions like github.com/tkellogg/dura, but I have not tested them.
There’s some lack of options in this space. For some, it might be nicer to run an online IDE.
…
To add something, I second the “just use Git over SSH without installing any additional server” advice. A variation is using something like Gitolite on top of raw Git if you need multiple users and permissions; it’s still lighter than running Forgejo.
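The “raw Git over SSH” setup is just a bare repository on the server. Over SSH this would be `ssh me@server 'git init --bare ~/repos/project.git'` and then `git clone me@server:repos/project.git` on each client (names invented); the sketch below demonstrates the same mechanism with local paths:

```shell
# A bare repository is the whole "server": no daemon, no web UI.
REMOTE=$(mktemp -d)/project.git
git init --bare "$REMOTE"

# "Clone" it the way a client would, then commit and push.
WORK=$(mktemp -d)
git clone "$REMOTE" "$WORK"
cd "$WORK"
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "first commit"
git push origin HEAD
```

Any box you can SSH into becomes a Git host this way; access control is just Unix accounts and file permissions (which is exactly the gap Gitolite fills).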
- Comment on Have you tried self-hosting your own email recently? 2 weeks ago:
Reminder that you can go for hybrid approaches; receive email and host IMAP/webmail yourself, and send emails through someone like AWS. I am not saying you can’t do SMTP yourself, but if you want to just dip your toes, it’s an option.
You get many of the advantages; you control your email addresses, you store all of the email and control backups, etc.
…
And another thing: you could also play with chatmail.at/relays, which is pretty cool. I had read about Delta Chat, but only decided to play with it recently and… it’s blown my mind.
- Comment on Managing proxmox, virtual machines, and others 4 weeks ago:
Yep, I do that on Debian hosts; EL (RHEL/Rocky/etc.) has a similar feature.
However, you need to keep an eye out for updates that require a reboot. I use my own Nagios agent that (among other things) warns me when hosts require a reboot (both apt and dnf make this easy to check).
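The checks themselves are one-liners; a sketch (the marker file is the standard apt one, and `needs-restarting` comes from EL’s dnf-utils/yum-utils package — the path argument on the Debian check is only there to make it easy to test):

```shell
# Debian/Ubuntu: apt drops a marker file when a reboot is pending.
reboot_needed_debian() {
    test -f "${1:-/var/run/reboot-required}"
}

# EL (RHEL/Rocky/Alma): needs-restarting -r exits non-zero when a
# reboot is needed (from the dnf-utils / yum-utils package).
reboot_needed_el() {
    ! needs-restarting -r >/dev/null 2>&1
}

if reboot_needed_debian; then
    echo "WARNING: host needs a reboot"
fi
```

Wire either function into whatever monitoring agent you use and alert on a non-zero count.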
I wouldn’t care about last online/reboots; I just do some basic monitoring to get an alert if a host is down. Spontaneous reboots would be a sign of an underlying issue.
- Comment on Looking for an RSS aggregator/summarizer/maybe-LLM thing 5 weeks ago:
Remember that Google News has RSS feeds! They are very well hidden, but they are there.
However, they are also a bit bad.
I started github.com/las-noticias/news-rss to postprocess Google News RSS feeds a bit and also to play with categorization. I found spaCy worked well for finding “topics”, but unfortunately I lost steam.
- Comment on Best Practice Ideas 5 weeks ago:
I think Cloudflare Tunnels will require a different setup on k8s than on regular Linux hosts, but it’s such a popular service among self-hosters that I have little doubt that you’ll find a workable process.
(And likely you could cheat, and set up a small Linux VM to “bridge” k8s and Cloudflare Tunnels.)
Kubernetes is different, but it’s learnable. In my opinion, K8S only comes into its own in a few scenarios:
- Really elastic workloads. If you have stuff that scales horizontally (uncommon), you really can tell Amazon to give you more Kubernetes nodes when load grows, and destroy the nodes when load goes down. But this is not really applicable to self-hosting, IMHO.
- Really clustered software. Setting up, say, a PostgreSQL cluster is a ton of work. But people create K8S operators that you feed a declarative configuration (I want so many replicas, I want backups at this rate, etc.) and that work out everything for you… in a way that works on all K8S implementations! This is also very cool, but I suspect there’s not a lot of this in self-hosting.
- Building SaaS platforms, etc. This is something that might be more reasonable to do in a self-hosting situation.

Like the person you’re replying to, I also run Talos (as a VM in Proxmox). It’s pretty cool. But in the end, I only run 4 apps I’ve written myself there, so I’m using K8S as a kind of SaaS… plus one other application, github.com/avaraline/incarnator, which is basically distributed as container images and which I was too lazy to deploy in a more conventional way.
I also do this for learning. Although I’m not a fan of how Docker Compose is becoming dominant in the self-hosting space, I have to admit it makes more sense than K8S for self-hosting. But K8S is cool and might get you a cool job, so by all means play with it; maybe you’ll have fun!
- Comment on Docker or Proxmox? Something else entirely? 1 month ago:
I haven’t tested this, but I would expect there to be ways to do it, especially for VMs (as opposed to LXC containers).
(I try to automate provisioning as much as possible, so I don’t do this kind of stuff often.)
The Incus forum is not huge, but it’s friendly, and the authors are quite active.
- Comment on Docker or Proxmox? Something else entirely? 1 month ago:
Came in here to mention Incus if no one had.
I love it. I have three “home production” servers running Proxmox, but mostly because Proxmox is one of very few LTS/commercially-supported ways to run Linux with root (and everything else) on ZFS. And while its web UI is still a bit clunky in places, it comes in handy sometimes.
However, Incus automation is just… superior.
`incus launch --vm images:debian/13 foo`, wait a few seconds, then `incus exec foo -- bash` and I’m root on a console of a ready-to-go Debian VM. Without `--vm`, it’s a lightweight LXC container. And Ansible supports running commands through `incus exec`, so you can provision stuff WITHOUT BOTHERING TO SET UP ANYTHING.
AND, it works remotely without fuss, so I can set up an Incus remote on a beefy server and spawn VMs nearly transparently. Plus `incus file pull|push` to transfer files.
I’m kinda pondering scripting removal of the Proxmox bits from a Proxmox install, so that I just keep their ZFS support and run Incus on top.
- Comment on Looking for a buddy - Proxmox containers, Coopcloud or other packaged solution 1 month ago:
If you speak Spanish, a month ago or so I was pointed at foro.autoalojado.es, might be interesting to discuss the in-person stuff, although it doesn’t seem like it’s reaching a critical mass of activity :(
- Comment on Virtual Machines- is there a better way to jump start a VM? 2 months ago:
Incus has a great selection of images that are ready to go, plus gives scripted access to VMs (and LXC containers) very easily; after `incus launch` to create a VM, `incus exec` can immediately run commands as root for provisioning.
- Comment on Docker is not available in RHEL10 3 months ago:
Nextcloud is in EPEL 10. You’ll get updates along with the rest of the OS.
I have been using EPEL 9 Nextcloud for a good while and it’s been a smooth experience.
If you specifically want Docker, I would not choose an EL10 distro, really. I have been test-driving AlmaLinux 10 and it’s pretty nice, but I would look elsewhere.
- Comment on New server for the family, Proxmox or TrueNAS, LXC or Docker? 3 months ago:
IMHO, it really depends on the specific services you want to run. I guess you are most familiar with Docker and everything that you want to run has a first-class-citizen Docker container for it. It also depends on whether the services you want to run are suitable for Internet exposure or not (and how comfortable you are with the convenience tradeoff).
LXC is very different. Although you can run Docker nested within LXC, you gotta be careful because, IIRC, some setups used to not work so well (maybe it works better now, but Docker nested within LXC on a ZFS file system used to be a problem).
I like that Proxmox + LXC + ZFS means that it’s all ZFS file systems, which gives you a ton of flexibility; if you have VMs and volumes, you need to assign sizes to them, resize if needed, etc.; with ZFS file systems you can set quotas, but changing them is much less fuss. But that would likely require much more effort for you. This is what I use, but I think it’s not for everyone.
- Comment on Searching advice for selfhosting critical data 4 months ago:
I assume you basically want protection against disasters, but not high uptime.
(E.g. you likely can live with a week of unavailability if after a week you can recover the data.)
The key is about proper backups. For example, my Nextcloud server is running in a datacenter. Every night I replicate the data to a computer running at home. Every week I run a backup to a USB drive that I keep in a third location. Every month I run a backup to a USB drive on the computer I mentioned at home.
So I could lose two locations and still have my data.
There is much written about backup strategies, for example en.wikipedia.org/wiki/3-2-1_backup_rule … Just start with your configuration, think what can go wrong and what would happen, and add redundancy until you are OK with the risks.
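As a concrete sketch, a schedule like the one above is just a few cron entries on the server (paths, hosts, and the choice of rsync are illustrative; any backup tool slots in the same way, and for real history you’d want snapshots rather than a plain mirror):

```
# Hypothetical crontab on the datacenter server.
# Nightly: replicate to the machine at home (off-site from the DC).
0 3 * * *   rsync -a --delete /srv/nextcloud/ backup@home:/backups/nextcloud/
# Weekly: copy to the locally attached USB drive, rotated off-site.
0 4 * * 0   rsync -a --delete /srv/nextcloud/ /mnt/usb/nextcloud/
```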
- Comment on Distributed/replicated storage options 4 months ago:
What volume of data are you discussing? How many physical nodes? Can you give a complete usage example of what you want to achieve?
In general, there’s a step change in making things properly distributed, and distributed systems are often designed for big and complex situations, so they “can afford” to be big and complex too.
- Comment on GitHub - gardner/LocalLanguageTool: Self-hosted LanguageTool private instance is an offline alternative to Grammarly 4 months ago:
Running LanguageTool locally is a bit of a pain, with some manual steps. Plus, you have to fetch some data files. You can find a few projects around, like this one, that make it easier to run LanguageTool.
And yes, as the poster mentioned, LanguageTool keeps some code exclusive to their paid version. There’s a bit of tension because they ask people not to extend the OSS LanguageTool with their paid features.
There’s also this interesting clone, but it seems abandoned.
- Comment on Battle of the noobs: CasaOS X Yunohost X TrueNAS Scale 4 months ago:
You need two drives for the OS and four for data. Hetzner boxes are cheap with 2 drives; the cost multiplies if you add any more.
- Comment on What webapps do you selfhost that aren't media/game servers? 4 months ago:
I use LDAP auth, but no SSO or external mounts. Actually, I tested external mounts, but they gave me bad vibes, although they are interesting.
Other than that, I just run a preview generator application, no other plugins.
- Comment on What webapps do you selfhost that aren't media/game servers? 4 months ago:
I was looking at the Proxmox graphs. Now, looking at `iostat`, `r/s` measured over 10s hovers between 0 and 0.20, with no visible effect from spamming reload on a Nextcloud URL. If you want me to run any other measurement command, happy to.
- Comment on What webapps do you selfhost that aren't media/game servers? 4 months ago:
I see some CPU and memory usage on my setup… but I don’t even see any IO!
Literally, the IO chart for “week (maximum)” on Proxmox for my Nextcloud LXC container is 0, except for two bursts of 3 hours or less each. (Maybe package updates?)
The PostgreSQL LXC container has some more activity (but not much), but that’s backing Nextcloud and four other applications (one being Miniflux, which has much more data churn).
- Comment on What webapps do you selfhost that aren't media/game servers? 4 months ago:
Huh, what?
I see in your link that that image has support for KasmVNC, which is great and you could use to make Emacs work…
But the whole point of VS Code is that it can run in a browser and not use a remote desktop solution, which is always going to be a worse experience than a locally-rendered UI.
I kinda expect someone to package Emacs with a JS terminal, or with a browser-friendly frontend, but I’m always very surprised that this does not exist. (It would be pretty cool to have a Git forge that can spawn an Emacs with my configuration on a browser to edit a repository.)
- Comment on What webapps do you selfhost that aren't media/game servers? 5 months ago:
Eh, my Nextcloud LXC container idles at less than 4.5% CPU usage (“max over the week” from Proxmox). I use PostgreSQL as the backend on a separate LXC container that has some peaks of 9% CPU usage, but is normally at 5% too.
I only have two users, though. But both containers have barely any IO activity.
- Comment on What webapps do you selfhost that aren't media/game servers? 5 months ago:
Web-accessible Emacs? What are you using?
- Comment on What webapps do you selfhost that aren't media/game servers? 5 months ago:
I keep everything documented, along with my infrastructure as code stuff. Briefly:
- Nextcloud
- Vaultwarden
- Miniflux
- My blog
- Takahe (a multi-domain ActivityPub server)
- My health tracker CRUD data entry
- alexpdp7.github.io/selfhostwatch/
- Grafana (for health stats and monitoring data from Nagios)
- Nagios
- FreeIPA/Ipsilon (SSO)
- Comment on Need suggestions for setting up backups between a local and remote server 5 months ago:
I was going to mention ZFS, but I suspect Raspberry Pis are too weak for ZFS?
If you can use ZFS on both sides, send/receive is the bomb. (I use it for my backups.) However, I’m not sure how well encryption would work for your purpose. IIRC, last time I looked at it, if you wanted an encrypted replica, the source dataset had to be encrypted, which did not make me happy.
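A sketch of the send/receive loop, wrapped in a function so the zfs/ssh calls are visible. Dataset and host names (`tank/data`, `backup@nas`, `backup/data`) are invented; a real script would also track the previous snapshot for incremental sends:

```shell
# Replicate a dataset to a backup host. First run sends everything;
# subsequent runs would use `zfs send -i <previous-snap>` instead.
replicate() {
    ds=$1
    remote=$2
    remote_ds=$3
    snap="$ds@backup-$(date +%F)"
    # Snapshot first, so the send is from a consistent point in time.
    zfs snapshot "$snap"
    # Stream the snapshot over SSH into the remote dataset.
    zfs send "$snap" | ssh "$remote" zfs receive -F "$remote_ds"
}
```

Usage would be `replicate tank/data backup@nas backup/data` from cron; receives are atomic, so an interrupted transfer never leaves a half-written replica.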
I’d love to work on making NASes “great” for non-technical people. I feel it’s key. Sending encrypted backups between peers is one of my personal obsessions. It should be possible for people to buy two NASes, then set up encrypted backups over the Internet with a simple procedure. I wish TrueNAS Scale enabled that; right now it’s the closest thing that exists, I think.