I generally agree with the sentiment, but don’t pull `latest`, or at the very least don’t expect every new version to work without issue.
Most projects are very well behaved as you say, but they still need to release major versions now and again, and those can contain breaking changes.
I spent an afternoon putting my compose files into git, setting up a simple CI pipeline, and using Renovate to automatically create PRs when things update. Now all my services are pinned to specific versions, and when there’s an update I get a PR to make the change, along with a nice changelog telling me what’s actually changed.
It’s a little more effort but things don’t suddenly break any more. Highly recommend this approach.
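A minimal sketch of that setup. The service name (`nextcloud`) and version tag are hypothetical placeholders, and the repo here is a throwaway temp directory; the point is just a pinned tag in a git-tracked compose file that Renovate can open PRs against.

```shell
# Sketch of the pinned-version approach: compose files in git, exact tags.
workdir=$(mktemp -d)
mkdir -p "$workdir/nextcloud"
cat > "$workdir/nextcloud/docker-compose.yaml" <<'EOF'
services:
  nextcloud:
    # Pinned to an exact tag instead of :latest, so nothing changes until
    # a Renovate PR bumps this line and you merge it.
    image: nextcloud:28.0.4
    restart: unless-stopped
EOF
# Track the compose files in git so Renovate (and your CI) can see them.
git -C "$workdir" init -q
git -C "$workdir" add nextcloud/docker-compose.yaml
```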
wilo108@lemmy.ml 3 weeks ago
I use digests in my docker compose files, and I update them when new versions are released (after reading the release notes) 🤷
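For illustration, digest pinning looks like this. The `sha256` value below is an all-zero placeholder, not a real digest; you can look up a real one with `docker images --digests` or `docker buildx imagetools inspect <image>`.

```shell
# Sketch of pinning an image by digest in a compose file.
workdir=$(mktemp -d)
cat > "$workdir/docker-compose.yaml" <<'EOF'
services:
  web:
    # A digest pin is immutable: even if the tag is re-pushed upstream,
    # this exact image keeps being used until you change this line yourself.
    image: nginx@sha256:0000000000000000000000000000000000000000000000000000000000000000
EOF
grep 'image:' "$workdir/docker-compose.yaml"
```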
suicidaleggroll@lemmy.world 3 weeks ago
Unfortunately that approach is simply not feasible unless you have very few containers or you make it your full time job.
wilo108@lemmy.ml 3 weeks ago
I dunno, I’ve never found it all that onerous.
I have a couple of dozen (perhaps ~50) containers running across a bunch of servers, I read the release notes via RSS so I don’t go hunting for news of updates or need to remember to check, and I update when I’m ready to. Security updates will probably be applied right away (unless I’ve read the notes and decided it’s not critical for my deployment(s)), for feature updates I’ll usually wait a few days (dodged a few bullets that way over the years) or longer if I’m busy, and for major releases I’ll often wait until the first point release unless there’s something new I really want.
Unless there are breaking changes it takes a few moments to update the docker-compose.yaml and then
`dcp` (aliased to `docker compose pull`) and `dcdup` (aliased to `docker compose down && docker compose up -d && docker compose logs -f`). I probably spend upwards of 15 or 20 minutes a week under normal circumstances, but it’s really not a full time job for me 🤷.
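Spelled out, those two aliases would look something like this (e.g. in `~/.bashrc`):

```shell
# Pull updated images for the services in the current compose file.
alias dcp='docker compose pull'
# Recreate the stack from the new images and tail the logs.
alias dcdup='docker compose down && docker compose up -d && docker compose logs -f'
```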
suicidaleggroll@lemmy.world 3 weeks ago
I guess it depends on the containers that are being run. I have 175 containers on my systems, and between them I get somewhere around 20 updates a day. It’s simply not possible for me to read through all of those release notes and fully understand the implications of every update before implementing them.
So instead I’ve streamlined my update process to the point that any container with an available update gets a button on an OliveTin page, and clicking that button pulls the update and restarts the container. With that in place I don’t need fully autonomous updates, I can still kick them off manually without much effort, which lets me avoid updating certain “problematic” containers until after I’ve read the release notes while still blindly updating the rest of them. Versions all get logged as well, so if something does go wrong with an update (which does happen from time to time, though it’s fairly rare) I can easily roll back to the previous image and then wait for a fix before updating again.
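A rollback of the kind described can be sketched like this. The service name and version numbers are hypothetical stand-ins for whatever the version log recorded, and the actual container recreation (which needs a running Docker daemon) is left as a comment.

```shell
# Hypothetical rollback after a bad update: revert the pinned tag and
# recreate the container from the previously logged image version.
workdir=$(mktemp -d)
cat > "$workdir/docker-compose.yaml" <<'EOF'
services:
  app:
    image: example/app:2.0.0
EOF
# Revert to the last known-good version from the version log...
sed -i 's|example/app:2.0.0|example/app:1.9.3|' "$workdir/docker-compose.yaml"
# ...then recreate the container and wait for an upstream fix:
#   docker compose -f "$workdir/docker-compose.yaml" up -d
grep 'image:' "$workdir/docker-compose.yaml"
```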
RIotingPacifist@lemmy.world 3 weeks ago
Yeah this is why I use Debian instead of containers, you can read the release notes on a stable release.
BradleyUffner@lemmy.world 3 weeks ago
Is manually updating based on trusting the accuracy of the release notes any more secure than just trusting “latest”?
CameronDev@programming.dev 3 weeks ago
You might, but I bet the majority of people set and forget.
I rely on watchtower to keep things up to date.