traches
@traches@sh.itjust.works
- Comment on Trine Was a Masterpiece. Why Doesn’t Anyone Remember? 2 hours ago:
Dude, the same people made Nine Parchments, which got me and my friends through the pandemic. It’s such a good game and I don’t think we’ll ever get a sequel :(
It worked for us because you could do combo co-op: my wife and I sharing a switch at our place, friends (also a couple) on their switch at their place.
It’s a bit like a very simplified Diablo, with friendly fire. Minimal loot and a 5-color elemental system. Mostly achievement-based unlocks. Has a permadeath mode where if you wipe as a party, you have to start the campaign over. Fun, whimsical art, and the music ain’t bad either. My only real criticism is that they put so little effort into the plot I wonder why they bothered at all, but it does stay out of your way for the most part.
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
Broadly similar from a quick glance: www.amazon.pl/s?k=m-disc+blu+ray
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
My options look like this:
allegro.pl/kategoria/nosniki-blu-ray-257291?m-dis…
Exchange rate is 3.76 PLN to 1 USD, which is actually the best I’ve seen in years
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
I only looked at how ZFS tracks checksums because of your suggestion! Hashing 2TB will take a while, would be nice to avoid.
Nushell is neat, I’m using it as my login shell. Good for this kind of data-wrangling but also a pre-1.0 moving target.
- Comment on Selfhosted podcast has announced that episode 150 is their last. 1 week ago:
Tailscale deserves it, bitcoin absolutely does not
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
Where I live (not the US) I’m seeing closer to $240 per TB for M-disc. My whole archive is just a bit over 2TB, though I’m also including exported jpgs in case I can’t get a working copy of darktable that can render my edits. It’s set to save xmp sidecars on edit so I don’t bother with backing up the database.
I mostly wanted a tool to divide up the images into disk-sized chunks, and to automatically track changes to existing files, such as sidecar edits or new photos. I’m now seeing I can do both of those and still get files directly on the disk, so that’s what I’ll be doing.
I’d be careful with using SSDs for long-term, offline storage. I hear they can lose data if left unpowered for a long time. IMO metadata is small enough to just save a new copy when it changes.
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
I’ve been thinking through how I’d write this. With so many files it’s probably worth using sqlite, and then I can match them up by joining on the hash. Deletions and new files can be found with different join conditions. I found a tool called ‘hashdeep’ that can checksum everything, though for incremental runs I’ll probably skip hashing if the size, times, and filename haven’t changed. I’m thinking nushell for the plumbing? It runs everywhere, though they have breaking changes frequently. Maybe rust?
ZFS checksums are done at the block level, and after compression and encryption. I don’t think they’re meant for this purpose.
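In case it’s useful to anyone, here’s roughly how I’m picturing the sqlite side of this in Python — just a sketch, with my own table layout and function names (nothing here is hashdeep’s actual format), and the skip-the-hash shortcut when size and mtime haven’t moved:

```python
import hashlib
import os
import sqlite3

def sha256(path):
    """Stream a file through sha256 so 2TB doesn't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root, db):
    """Update the checksum database for everything under root.

    Files whose size and mtime match the stored row keep their old hash
    (the expensive part is skipped). Returns paths that are in the db
    but no longer on disk, i.e. deletions.
    """
    db.execute("""CREATE TABLE IF NOT EXISTS files
                  (path TEXT PRIMARY KEY, size INT, mtime REAL, hash TEXT)""")
    seen = set()
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            seen.add(path)
            row = db.execute(
                "SELECT size, mtime FROM files WHERE path = ?", (path,)
            ).fetchone()
            if row and row[0] == st.st_size and row[1] == st.st_mtime:
                continue  # unchanged: keep the existing hash
            db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                       (path, st.st_size, st.st_mtime, sha256(path)))
    gone = [p for (p,) in db.execute("SELECT path FROM files") if p not in seen]
    db.commit()
    return gone
```

This single-table version only surfaces deletions; catching renames would mean keeping the previous snapshot as its own table and joining old against new on hash instead of path.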
- Comment on Selfhosted podcast has announced that episode 150 is their last. 1 week ago:
Aww, man, I’m conflicted here. On one hand, they seem like good dudes who deserve to eat and whose work I’ve enjoyed for years. On the other, they’re AI enthusiast crypto-bros and that’s just fucking exhausting.
- Comment on [deleted] 1 week ago:
humans are neat
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
Yeah, you’re probably right. I already bought all the stuff, though. This project is halfway vibes-based; something about spinning rust just feels fragile, you know?
I’m definitely moving away from the complex archive split & merge solution.
fpart can make lists of files that add up to a given size, and fd can find files modified since a given date. A little bit of plumbing and I’ve got incremental backups that show up as plain files & folders on a disk.
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
Ohhh boy, after so many people are suggesting I do simple files directly on the disks I went back and rethought some things. I think I’m landing on a solution that does everything and doesn’t require me to manually manage all these files:
- fd (and any number of other programs) can produce lists of files that have been modified since a given date.
- fpart can produce lists of files that add up to a given size.
- xorrisofs can accept lists of files to add to an ISO.
So if I fd a list of new files (or don’t for the first backup), pipe them into fpart to chunk them up, and then pass those lists into xorrisofs to create ISOs, I’ve solved almost every problem.
- The disks have plain files and folders on them; no special software is needed to read them. My wife could connect a drive, pop the disk in, and the photos would be right there, organized by folder.
- Incremental updates can be accomplished by keeping track of whenever the last backup was.
- The fpart lists are also a greppable index; I can use them to find particular files easily.
Downsides:
- Change detection is naive. Just mtime. Good enough?
- Renames will still produce new copies. Solution: don’t rename files. They’re organized well enough, stop messing with it.
- Deletions will be disregarded.
- There isn’t much rhyme or reason to how fpart splits up files. The first backup will be a bit chaotic. I don’t think I really care.
Honestly those downsides look quite tolerable given the benefits. Is there some software that will produce and track a checksum database?
Off to do some testing to make sure these things work like I think they do!
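For anyone curious, the plan above translates to something like this in Python — a sketch only, with stand-ins for the fd and fpart steps, and DISC_BYTES is my guess at usable capacity, not a measured number:

```python
import os

DISC_BYTES = 25_000_000_000  # rough single-layer BD-R; roughly double for BD-DL

def new_files(root, since_ts):
    """Everything under root modified after the last backup timestamp
    (what fd --changed-within would hand back)."""
    out = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_mtime > since_ts:
                out.append(path)
    return out

def chunk(paths, limit=DISC_BYTES):
    """fpart-style greedy split: lists of files whose sizes sum to at most
    `limit`. A single file bigger than `limit` still gets its own chunk,
    so oversized files need handling before burning."""
    chunks, current, used = [], [], 0
    for path in paths:
        size = os.stat(path).st_size
        if current and used + size > limit:
            chunks.append(current)
            current, used = [], 0
        current.append(path)
        used += size
    if current:
        chunks.append(current)
    return chunks
```

Each chunk list would then be written out and handed to xorrisofs, something like `xorrisofs -o disc1.iso -path-list chunk1.txt` (check the exact option name against your xorriso version; I’m going from the mkisofs emulation docs).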
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
Yeah, I already use restic which is extremely similar and I don’t believe it could do this either. Both are awesome projects though
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
Hey cool, I hadn’t heard of bacula! Looks like a really robust project. I did look into tape storage, but I can’t find a tape drive for a reasonable price that doesn’t have a high jank factor (internal, 5.25" drives with weird enterprise connectors and such).
I’m digging through their docs and I can’t find anything about optical media, except for a page in the manual for an old version. Am I missing something? It seems heavily geared towards tapes.
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
Can borg back up to write-once optical media spread over multiple disks? I’m looking through their docs and I can’t find anything like that. I see an append-only mode but that seems more focused on preventing hacked clients from corrupting data on a server.
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
I’m using standard BD-DLs. M-Discs are almost triple the price, and this project is already too costly. I’m not looking for centuries of longevity; I’m using optical media because it’s read-only once written. I read that properly stored Blu-Rays should be good for 10 or 20 years, which is good enough for me. I’ll make another copy when the read errors start getting bad.
Copying files directly would work, but my library is real big and that sounds tedious. I have photos going back to the 80s and curating, tagging, and editing them is an ongoing job. (This data is saved in XMP sidecars alongside the original photos). I also won’t be encrypting or compressing them for the same reasons you mentioned.
For me, the benefit of the archive tool is to automatically split it up into disk-sized chunks. That and to automatically detect changes and save a new version; your first key doesn’t hold true for this dataset. You’re right though, I’m sacrificing accessibility for the rest of the family. I’m hoping to address this with thorough documentation and static binaries on every disk.
- Comment on Incremental backups to optical media: tar, dar, or something else? 1 week ago:
Woah, that’s cool! I didn’t know you could just zfs send anywhere. I suppose I’d have to split it up manually with split or something to get 50GB chunks?
Dar has dar_manager, which you can use to create a database of snapshots and slices that you can use to locate individual files, but honestly if I’m using this backup it’ll almost certainly be a full restore after some cataclysm.
- Submitted 1 week ago to selfhosted@lemmy.world | 30 comments
- Comment on [deleted] 1 month ago:
The part I’m calling out as untrue is the „magic 8 ball” comment, because it directly contradicts my own personal lived experience. Yes it’s a lying, noisy, plagiarism machine, but its accuracy for certain kinds of questions is better than a coin flip and the wrong answers can be useful as well.
Some recent examples
- I had it write an excel formula that I didn’t know how to write, but could sanity check and test.
- Worked through some simple, testable questions about setting up project references in a typescript project
- I want to implement URL previews in a web project but I didn’t know what the standard for that is called. Every web search I could think of related to „url previews” is full of SEO garbage I don’t care about, but ChatGPT immediately gave me the correct answer (Open Graph meta tags), easily verified by searching for that and reading the public documentation.
- Naming things is a famously hard problem in programming and LLMs are pretty good at „what’s another way to say” and „what’s it called when” type questions.
Just because you don’t have the problems that LLMs solve doesn’t mean that nobody else does. And also, dude, don’t scold people on the internet. The fediverse has a reputation and it’s not entirely a good one.
- Comment on [deleted] 1 month ago:
Well that’s just blatantly false. They’re extremely useful for the initial stage of research, when you’re not really sure where to begin or what to even look for, when you don’t know what you should read or even what the correct terminology is surrounding your problem. They’re “language models”, which means they’re halfway decent at working with language.
They’re noisy, lying plagiarism machines that have created a whole Pandora’s box full of problems and are being shoved in many places where they don’t belong. That doesn’t make them useless in all circumstances.
- Comment on [deleted] 1 month ago:
Sure, but you at least have something to work with rather than whatever you know off the top of your head
- Comment on [deleted] 1 month ago:
Because it’s like a search box you can explain a problem to and get a bunch of words related to it without having to wade through blogspam, 10 year old Reddit posts, and snippy stackoverflow replies. You don’t have to post on discord and wait a day or two hoping someone will maybe come and help. Sure it is frequently wrong, but it’s often a good first step.
And no I’m not an AI bro at all, I frequently have coworkers dump AI slop in my inbox and ask me to take it seriously and I fucking hate it.
- Comment on What host names do you use? 1 month ago:
Charybdis, hippo, appa, Momo, pabu
- Comment on How can I host a small api/database accessable from a phone app as cheap/easily as possible? 1 month ago:
If you want to self-host, I recommend a used business thin client, docker + docker-compose, and Tailscale for access away from home if needed. Don’t forget to dump & back up nightly.
Or you could use hosted services, neon.tech and turso both offer really generous free tiers for SQL databases.
Or you could use a notebook and pen. Sometimes simplicity is king.
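If you go the self-host route, a minimal docker-compose.yml can be as small as this — image, password, and paths are placeholders, not a recommendation of specifics:

```yaml
services:
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - "127.0.0.1:5432:5432"  # bind to localhost; reach it over Tailscale
```

The nightly dump & backup step is then just a cron job running pg_dump into whatever folder your backup tool already covers.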
- Comment on Any nice playbook or tutorial to host a static website from home? 1 month ago:
The trickier part here is connecting your domain to your Raspberry Pi and allowing the big internet to access it. You have a few options:
- Set up dynamic DNS to direct your domain name to your (presumably dynamic) home IP address. Forward ports 80 and 443 to the rpi. The world knows your home IP address, and you’re dependent on your router for security. No spam or DDOS protection.
- Use a service such as cloudflare tunnel. You’re dependent on cloudflare or whoever, but it’s an easier config, you don’t need to open ports in your firewall, and your home IP address is not public. (I recommend this option.)
- Comment on Which reverse proxy do you use/recommend? 2 months ago:
I’ve been using caddyserver for a while and love it. Config is nicely readable and the defaults are very good.
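To give a flavor of it, a complete reverse-proxy config can be this short (domain and port are placeholders):

```
example.com {
    reverse_proxy localhost:8080
}
```

With just that, Caddy also obtains and renews TLS certificates for the domain automatically, which is a big part of why the defaults feel so good.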
- Comment on What are your Homelab goals for 2025? 3 months ago:
Got a 3 year old kid with another on the way. I just need it to be reliable so the kid can watch Sesame Street and the lights keep working.
- Comment on Help with training plan 5 months ago:
Seems fine, but you’re sorta hitting two fields at once. Application development (coding) is a different skill set from devops/deployment (docker). I’d stay pretty surface level on docker and the CLI for now and focus on building your app. You’ll know when you need to go off and learn those things.