suicidaleggroll
@suicidaleggroll@lemm.ee
- Comment on Bernstein Posits That A 10 Percent Baseline US Tariff On Raw Semiconductors Is "Not Going To Do All That Much," But PCs, Servers, And Smartphones Are About To Get Pricier By ~40 Percent 4 hours ago:
It wouldn’t matter. The public doesn’t listen directly to politicians; everything gets filtered through the media first, and the media picks and chooses which parts it actually reports. The people who would actually hear this already know. The people who need to hear it never will, because Fox won’t show it to them.
- Comment on Massive X data leak affects over 200 million users. 2 days ago:
Yes, and Bitwarden + SimpleLogin. Bitwarden keeps track of login info, including the alias used for each site. SimpleLogin is where the aliasing is actually handled; it has a decent UI for enabling/disabling aliases or generating reverse aliases (for outgoing emails) when needed.
It does take a little more effort to manage, but the payoff is worth it. I’ve been using this setup for about 9 months now, and I finally got my first spam email a week ago. I looked at the address it was sent to; it was for a site I ordered something from about 6 months ago. I sent them a message letting them know that either someone at their company is selling customer info to scammers or their database has been leaked, and then I shut off the alias.
- Comment on Enshittification 1 week ago:
Yes, by staying privately funded and not throwing everything away chasing quarterly profits
- Comment on Does it ever make sense/is it possible to move certain docker volumes to another physical volume, but not all? 1 week ago:
Same, I don’t let Docker manage volumes for anything. If I need something to be persistent, I bind mount it into a subdirectory alongside that container’s compose file. It makes backups much easier as well, since you can just stop all containers, back up everything in ~/docker or wherever you keep your compose files and volumes, and then restart them all.
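For example, a minimal backup sketch along those lines (assuming one compose stack per subdirectory under ~/docker; paths and layout are illustrative, adjust to your setup):
```
#!/bin/bash
# Stop every stack, archive the whole tree (compose files, .env files,
# bind-mounted data), then start everything back up.
cd ~/docker || exit 1
for d in */; do (cd "$d" && docker compose stop); done
mkdir -p ~/backups
tar czf ~/backups/docker-$(date +%F).tar.gz .
for d in */; do (cd "$d" && docker compose start); done
```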
- Comment on How best to store a media library in proxmox? 1 week ago:
I would separate the media and the Jellyfin image into different pools. Media would be a normal ZFS pool full of media files that gets mounted into any VM that needs it, like Jellyfin, sonarr, radarr, qbittorrent, etc. (preferably mounted read-only in Jellyfin if you’re going to expose Jellyfin to the internet).
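If it helps, here’s a rough sketch of that layout on the Proxmox host (the pool name, dataset, and container ID are made-up examples; `pct set` covers LXC bind mounts, while a full VM would need NFS, Samba, or virtiofs instead):
```
# Create a dataset for media on an existing pool (names are illustrative)
zfs create tank/media
# Bind mount it read-only into the Jellyfin LXC container (CT 101 here)
pct set 101 -mp0 /tank/media,mp=/media,ro=1
```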
- Comment on Sanity check: am I crazy for wanting to wipe everything and do/learn from scratch? 1 week ago:
As far as networking, from what I could see the only real change CasaOS was making was mapping its dashboard to port 80, but not much more. Is there anything more I should be aware of in general?
It depends on how you have things set up. If you’re just doing normal docker compose networking with port forwards then there shouldn’t be much to change, but if you’re doing anything more advanced like macvlan then you might have to set up taps on the host to be able to communicate with the container (not sure if CasaOS handles that automatically).
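For what it’s worth, the usual host-side workaround looks something like this (the interface name, parent NIC, and addresses are assumptions; a host can’t otherwise talk to its own macvlan containers):
```
# Create a macvlan "shim" on the host, give it an address, and route
# traffic for the containers' address range through it
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up
ip route add 192.168.1.192/27 dev macvlan-shim
```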
- Comment on Sanity check: am I crazy for wanting to wipe everything and do/learn from scratch? 1 week ago:
The nice thing about Docker is that all you need to back up is your compose file, .env file, and mapped volumes, and you can easily restore on any other system. I don’t know much about CasaOS, but presumably you can stop your containers and access the filesystem to copy their config and mapped volumes elsewhere? If so, this should be pretty easy. You might have some networking stuff to work out, but I suspect the rest will go smoothly, and IMO it would be a good move.
When self-hosting, the more you know about how things actually work, the easier it is to fix when something is acting up, and the easier it is to make known good backups and restore them.
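As a minimal sketch of that kind of move (hostnames and paths are illustrative):
```
# Copy a stopped stack (compose file, .env, mapped volumes) to the new box
rsync -a olduser@oldhost:~/docker/myapp/ ~/docker/myapp/
cd ~/docker/myapp
docker compose up -d   # recreates the containers against the copied volumes
```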
- Comment on Qobuz reveals how much it really pays per stream, and I want to see more of this transparency to help us spend money more ethically 1 week ago:
While true, and I have a lot of DRM-free music that I’ve bought from Apple, the difference is that getting music purchased from Apple onto your computer in a usable format is a bit of a pain, and it’s all lossy. Music from Qobuz can be downloaded directly from their site after purchasing, in lossless FLAC format, and many of their albums are available in high-res 24-bit and/or 96 kHz format as well.
- Comment on [OC] mag37/dockcheck - CLI tool to automate docker image updates. 2 weeks ago:
Would you mind if I added this as a discussion (crediting you and this post!) in the github project?
Yeah that would be fine
- Comment on [OC] mag37/dockcheck - CLI tool to automate docker image updates. 2 weeks ago:
Sure, it’s a bit hacked together, but not too bad. Honestly the dockcheck portion is already pretty complete; I’m not sure what you could add to improve it. The custom plugin I’m using does nothing more than dump the array of container names with available updates to a comma-separated list in a file. In addition to that, I also have a wrapper for dockcheck which does two things:
- dockcheck plugins only run when there’s at least one container with available updates, so the wrapper is used to handle cases when there are no available updates.
- Some containers aren’t handled by dockcheck because they use their own management system; two examples are bitwarden and mailcow. The wrapper script can be modified as needed to handle those as well, but those have to be one-offs, since there’s no general-purpose way to check for updates on containers that insist on doing things their own custom way.
Basically there are 5 steps to the setup:
- Enable Prometheus metrics from Docker (this is just needed to get running/stopped counts; if those aren’t needed, this step can be skipped). To do that, add the following to /etc/docker/daemon.json (create it if necessary) and restart Docker:
{ "metrics-addr": "127.0.0.1:9323" }
Once running, you should be able to run
curl http://localhost:9323/metrics
and see a dump of Prometheus metrics.
- Clone dockcheck, and create a custom plugin for it at dockcheck/notify.sh:
```
send_notification() {
    Updates=("$@")
    UpdToString=$(printf ", %s" "${Updates[@]}")
    UpdToString=${UpdToString:2}
    File=updatelist_local.txt
    echo -n "$UpdToString" > "$File"
}
```
- Create a wrapper for dockcheck:
```
#!/bin/bash
cd "$(dirname "$0")"
./dockcheck/dockcheck.sh -mni
if [[ -f updatelist_local.txt ]]; then
    mv updatelist_local.txt updatelist.txt
else
    echo -n "None" > updatelist.txt
fi
```
At this point you should be able to run your script, and at the end you’ll have the file “updatelist.txt”, which will contain either a comma-separated list of all containers with available updates or “None” if there are none. Add this script to cron to run on whatever cadence you want; I use 4 hours.
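For example, a crontab entry for a 4-hour cadence might look like this (the wrapper path is just a placeholder):
```
0 */4 * * * /opt/dockcheck/update_wrapper.sh
```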
- The main Python script:
```
#!/usr/bin/python3
from flask import Flask, jsonify
import os
import time
import requests
import json

app = Flask(__name__)

# Listen addresses for docker metrics
dockerurls = ['http://127.0.0.1:9323/metrics']
# Other dockerstats servers
staturls = []
# File containing list of pending updates
updatefile = '/path/to/updatelist.txt'

@app.route('/metrics', methods=['GET'])
def get_tasks():
    running = 0
    stopped = 0
    updates = ""
    for url in dockerurls:
        response = requests.get(url)
        if (response.status_code == 200):
            for line in response.text.split("\n"):
                if 'engine_daemon_container_states_containers{state="running"}' in line:
                    running += int(line.split()[1])
                if 'engine_daemon_container_states_containers{state="paused"}' in line:
                    stopped += int(line.split()[1])
                if 'engine_daemon_container_states_containers{state="stopped"}' in line:
                    stopped += int(line.split()[1])
    for url in staturls:
        response = requests.get(url)
        if (response.status_code == 200):
            apidata = response.json()
            running += int(apidata['results']['running'])
            stopped += int(apidata['results']['stopped'])
            if (apidata['results']['updates'] != "None"):
                updates += ", " + apidata['results']['updates']
    if (os.path.isfile(updatefile)):
        st = os.stat(updatefile)
        age = (time.time() - st.st_mtime)
        if (age < 86400):
            f = open(updatefile, "r")
            temp = f.readline()
            if (temp != "None"):
                updates += ", " + temp
        else:
            updates += ", Error"
    else:
        updates += ", Error"
    if not updates:
        updates = "None"
    else:
        updates = updates[2:]
    status = {
        'running': running,
        'stopped': stopped,
        'updates': updates
    }
    return jsonify({'results': status})

if __name__ == '__main__':
    app.run(host='0.0.0.0')
```
The neat thing about this program is it’s nestable, meaning if you run steps 1-4 independently on all of your Docker servers (assuming you have more than one), then you can pick one of the machines to be the “master” and update the “staturls” variable to point to the other ones, allowing it to collect all of the data from other copies of itself into its own output. If the output of this program will only need to be accessed from localhost, you can change the host variable in app.run to 127.0.0.1 to lock it down. Once this is running, you should be able to run
curl http://localhost:5000/metrics
and see the running and stopped container counts and available updates for the current machine and any other machines you’ve added to “staturls”. You can then turn this program into a service, or launch it @reboot in cron or in /etc/rc.local, whatever fits your management style, to start it up on boot.
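If you go the service route, a minimal systemd unit might look something like this (the script path and user are assumptions):
```
[Unit]
Description=Docker container stats/updates API
After=network-online.target docker.service

[Service]
# Path to the Flask script above; adjust to wherever you put it
ExecStart=/usr/bin/python3 /opt/dockstats/dockstats.py
Restart=on-failure
User=youruser

[Install]
WantedBy=multi-user.target
```
Drop that in /etc/systemd/system/dockstats.service and enable it with systemctl enable --now dockstats.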
- Finally, the Homepage custom API to pull the data into the dashboard:
```
widget:
  type: customapi
  url: http://localhost:5000/metrics
  refreshInterval: 2000
  display: list
  mappings:
    - field:
        results: running
      label: Running
      format: number
    - field:
        results: stopped
      label: Stopped
      format: number
    - field:
        results: updates
      label: Updates
```
- Comment on What are people doing for home server UPS in 2025? 2 weeks ago:
Personally, I just have a couple of cheap CyberPower UPSs for my servers. I know, I know, but I’m waiting for them to get old and die before I replace them with something better. My modem, router, and primary WiFi AP are on a custom LiFePO4-based UPS that I designed and built, because I felt like it. It’ll keep them running for around 10 hours, long after everything else in the house has shut down.
- Comment on Secure Storage That Won't Die With my Server 2 weeks ago:
Anything on a separate disk can simply be remounted after reinstalling the OS. It doesn’t have to be a NAS, DAS, RAID enclosure, or anything else external to the machine unless you want it to be. Actually, it looks like that Beelink only supports a single NVMe disk and doesn’t have SATA, so I guess it does have to be external to the machine, but for different reasons than you’re alluding to.
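For reference, reattaching an existing data disk after a reinstall is usually just an fstab entry (the UUID, mount point, and filesystem here are placeholders):
```
# Find the partition and its UUID, then mount it persistently
lsblk -f
mkdir -p /mnt/data
echo 'UUID=your-uuid-here /mnt/data ext4 defaults 0 2' >> /etc/fstab
mount -a
```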
- Comment on [OC] mag37/dockcheck - CLI tool to automate docker image updates. 2 weeks ago:
This is a great tool, thanks for the continued support.
Personally, I don’t actually use dockcheck to perform updates; I only use its update-check functionality, along with a custom plugin which, in cooperation with a Python script of mine, serves a REST API listing all containers on all of my systems with available updates. That then gets pulled into Homepage using their custom API function to make something like this: imgur.com/a/tAaJ6xf
So at a glance I can see any containers that have updates available, then I can hop into Dockge to actually apply them on my own schedule.