Submitted 22 hours ago by illusionist@lemmy.zip to selfhosted@lemmy.world
I have not yet read anything about it, but it’s not “highly unstable” anymore.
IMHO, never auto-upgrade anything.
Leave it a day / week or so, and then manually upgrade.
(And do a backup of application database(s) first.)
I’ve been running it for about a year and never noticed any instability. To be fair, the Android app did not always work well.
Nevertheless, I’ve never had a problem updating it once every couple of months. The changelog and upgrade instructions always covered everything.
You never know when breaking changes will happen; that’s why I wouldn’t recommend auto-updates.
But that’s why we have semantic versioning: automatically apply minor and patch releases, but only notify for major versions.
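As a rough sketch of that policy (the version strings and the plain major.minor.patch tag format here are just illustrative; real registry tags often need more parsing):

```python
# Semver-based update policy: auto-apply patch/minor bumps, only notify on
# major bumps. Assumes plain "major.minor.patch" version tags.

def parse(version: str) -> tuple[int, int, int]:
    major, minor, patch = (int(p) for p in version.split("."))
    return major, minor, patch

def decide(current: str, candidate: str) -> str:
    cur, cand = parse(current), parse(candidate)
    if cand <= cur:
        return "skip"         # nothing newer available
    if cand[0] > cur[0]:
        return "notify"       # major bump: read the changelog first
    return "auto-update"      # minor/patch bump: apply automatically

print(decide("1.118.2", "1.119.0"))  # auto-update
print(decide("1.119.0", "2.0.0"))    # notify
```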
Immich obviously does not follow semantic versioning if 2.0 is considered the first stable version.
You can never be sure. Regressions do happen.
First evaluate it on your test setup.
— useless hint of the day
Some filesystems can do snapshotting (btrfs, ZFS, …). Second best would be a current backup to restore from.
This is why I switched to ZFS with sanoid + syncoid. Any breaking update or data corruption can be rolled back, giving you the freedom to auto-update with minimal risk.
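A bare-bones illustration of the snapshot-then-update idea (sanoid manages snapshots and retention properly; the dataset name and update commands below are hypothetical):

```python
# Rough sketch: take a ZFS snapshot before updating, roll back if the
# update fails. Dataset name and update commands are hypothetical; sanoid
# would normally handle snapshot creation and retention for you.
import subprocess
from datetime import datetime, timezone

DATASET = "tank/apps/immich"  # hypothetical dataset holding the app data

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

snap = f"{DATASET}@pre-update-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
run("zfs", "snapshot", snap)

try:
    # hypothetical update step; Watchtower or a compose pull would go here
    run("docker", "compose", "pull")
    run("docker", "compose", "up", "-d")
except subprocess.CalledProcessError:
    run("zfs", "rollback", "-r", snap)  # -r also destroys snapshots newer than this one
    raise
```

The point is simply that the snapshot exists before anything changes, so a failed update becomes a one-command rollback.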
You can’t safely update anything, even if it’s ‘stable’.
But I do auto-update and just make sure my backups are working properly, in case something breaks that I can’t fix.
Darkassassin07@lemmy.ca 9 hours ago
I’ve had Immich auto-updating alongside around 36 other Docker containers for at least a year now. I very rarely have issues, and I just pin specific version tags on the things that have caused problems. Redis and Postgres, for example, have fixed version tags in both Immich and Paperless-NGX, because upgrading their old databases takes manual work.
The reason I don’t really worry about it: Solid backups.
BorgBackup runs in the early AM, shortly before Watchtower updates almost all of my containers, so a backup of the entire system (not including bulk storage) is taken first.
If I get up in the morning and find a service isn’t responding (Uptime-kuma notifies me via email if it can’t reach any container or service), I’ll mess with it and try to get the update working (I’ve only actually had to do this once so far; the rest has updated smoothly). Failing that, I can just extract yesterday’s data from the most recent backup and restore the previous version.
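A minimal sketch of that nightly pre-update backup step (repository location, source paths, and retention are all hypothetical placeholders; schedule it so it finishes before the auto-updater runs):

```python
# Nightly pre-update backup: one Borg archive per day, then prune old ones.
# Repo path, source paths, and retention are hypothetical; assumes an
# unencrypted repo or BORG_PASSPHRASE handled elsewhere.
import os
import subprocess
from datetime import date

env = {**os.environ, "BORG_REPO": "/mnt/backups/borg"}  # hypothetical repo

subprocess.run(
    [
        "borg", "create", "--stats", "--compression", "zstd",
        f"::system-{date.today():%Y-%m-%d}",  # archive named by date
        "/etc", "/srv/docker",                # hypothetical source paths
    ],
    env=env, check=True,
)

# Hypothetical retention: a week of dailies, two months of weeklies, a year of monthlies.
subprocess.run(
    ["borg", "prune", "--keep-daily", "7", "--keep-weekly", "8", "--keep-monthly", "12"],
    env=env, check=True,
)
```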
Because of Borg’s compression and de-duplication, consecutive backups of the same system can be stored in an absurdly small amount of space. I currently have 22 backups of ~532 GB each, going back a full year. They are stored in 474 GB of disk space. Raw, that’d be 11.8 TB.
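The rough arithmetic behind that figure:

```python
# Rough arithmetic behind the deduplication claim above.
raw_total_gb = 22 * 532   # ~11,700 GB of raw data, i.e. roughly 11.7-11.8 TB
stored_gb = 474
print(f"{raw_total_gb} GB raw vs {stored_gb} GB stored "
      f"-> ~{raw_total_gb / stored_gb:.0f}x reduction")
# 11704 GB raw vs 474 GB stored -> ~25x reduction
```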
Lem453@lemmy.ca 2 hours ago
Exactly this. I have hourly Borg backups, and since my install is entirely on a ZFS array, I also have ZFS auto-snapshots every 5 minutes with a retention policy. It adds almost zero CPU or memory overhead and means I can do just about anything via the command line and revert it with ease.
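For illustration, that 5-minute rotation boils down to something like this (zfs-auto-snapshot or sanoid do it properly; the dataset name and retention count are made up):

```python
# Bare-bones snapshot rotation: take a timestamped snapshot, then destroy
# the oldest ones beyond a fixed count. Dataset and retention are hypothetical.
import subprocess
from datetime import datetime, timezone

DATASET = "tank/apps"   # hypothetical dataset
KEEP = 288              # e.g. 24 hours' worth of 5-minute snapshots

stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{stamp}"], check=True)

# List this dataset's snapshots oldest-first, then destroy all but the newest KEEP.
snapshots = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-d", "1",
     "-o", "name", "-s", "creation", DATASET],
    capture_output=True, text=True, check=True,
).stdout.split()
for snap in snapshots[:-KEEP]:
    subprocess.run(["zfs", "destroy", snap], check=True)
```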
Voroxpete@sh.itjust.works 8 hours ago
Alright, you just sold me on borg backup. Well done.