rentar42
@rentar42@kbin.social
- Comment on [Repost] Reliable alternatives to AWS Deep Glacier for ~5TB? 6 months ago:
First: love that that's a thing, but I find the blog post hilarious:
We believe this choice must include the one to migrate your data to another cloud provider or on-premises. That’s why, starting today, we’re waiving data transfer out to the internet (DTO) charges when you want to move outside of AWS.
and later
We believe in customer choice, including the choice to move your data out of AWS. The waiver on data transfer out to the internet charges also follows the direction set by the European Data Act and is available to all AWS customers around the world and from any AWS Region.
But sure: it's purely out of their love for customer choice that they offer this now. The fact that it also fulfills the requirements of the European Data Act is entirely coincidental; they would have done it anyway, for sure.
Remember folks: regulation works. Sometimes corporations need the state(s) to force their hand to do the right thing.
- Comment on [Repost] Reliable alternatives to AWS Deep Glacier for ~5TB? 6 months ago:
I went with iDrive e2 https://www.idrive.com/s3-storage-e2/ 5 TB is $150/year (50% off the first year) for S3-compatible storage. My favorite part is that there are no per-request, ingress or egress costs. That cost is all there is.
- Comment on Sovereign Computing | Start9 6 months ago:
without trusting anyone.
Well, except of course the entity that gave you the hardware. And the entity that preinstalled and/or gave you the OS image. And you have to trust that that entity wasn't fooled into including malicious code in some roundabout way.
Like it or not, there's currently no real way to use any significant amount of computing power without trusting someone. And usually several hundreds or thousands of someones.
The best you can hope for is to focus the trust into a small number of entities that have it in their own self interest to prove worthy of that trust.
- Comment on Should I or should I not use a VLAN? I have trouble understanding the benefits for home use 6 months ago:
Like many other security mechanisms, VLANs aren't really about enabling anything that can't be done without them.
Instead they're almost exclusively about FORBIDDING some kinds of interactions that are otherwise allowed by default.
So if your question is "do I need VLAN to enable any features", then the answer is no, you don't (almost certainly, I'm sure there are some weird corner cases and exceptions).
What VLANs can help you do is stop your PoE camera from talking to your KNX installation, or your Chromecast from talking to your Switch. But why would you want that? They don't normally talk to each other anyway, right? That "normally" is exactly the point: one major benefit of VLANs is not just stopping "normal" phone-homes but containing any security incident to as small a scope as possible. Imagine someone figured out a way to hack your switch (maybe even remotely, while you're out!). That would be bad. What would be worse is if that attacker then suddenly had access to your pihole (which is password protected, and the password never flies around your home network unencrypted, right?!) or your PC or your phone ...
So having separate VLANs where each one contains only devices that need to talk to each other can severely restrict the actual impact of a security issue with any of your devices.
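As a rough sketch of what that separation looks like on a Linux router (the interface name, VLAN id and subnet here are placeholders, not from any particular setup; your switch must tag the same id):

```shell
# Put IoT traffic on its own tagged VLAN (id 20) on top of eth0.
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 192.168.20.1/24 dev eth0.20
ip link set eth0.20 up
# Firewall rules between eth0 and eth0.20 then decide which
# devices may talk across the boundary at all.
```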
- Comment on Lancache.net - LAN Party game caching made easy 6 months ago:
At a big enough LAN even just getting everyone to change that setting is probably harder than setting up a central cache. Don't underestimate the number of people who listen to instructions, say "sure", and then either don't do it or fail to do it correctly.
- Comment on Looking for a reverse proxy to put any service behind a login for external access. 7 months ago:
I've got the same setup! What I love about authentik is that I can even add a Google login as an authentication method. That severely increases the spouse-acceptance factor, as they don't have to "remember yet another password" or "carry around another thingie". Personally I use a YubiKey anyway, but for others who aren't into it "for fun" or for philosophical reasons reducing the friction as much as possible is paramount.
- Comment on Replacing CD Collection 8 months ago:
I've not tried that myself, but AFAIK VLC can be remote controlled in various ways, and since the API for that is open, multiple clients for it exist: https://wiki.videolan.org/Control_VLC_from_an_Android_Phone
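One of those ways is VLC's built-in HTTP interface, which any client speaking that API can drive. A minimal sketch (password, port and hostname are made-up examples):

```shell
# Start VLC with the HTTP remote-control interface enabled.
vlc --intf http --http-password secret --http-port 8080 &
# From another device (a phone app, or plain curl), toggle play/pause.
# The HTTP API authenticates with an empty username plus the password.
curl -u ":secret" "http://vlc-host:8080/requests/status.xml?command=pl_pause"
```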
- Comment on Docker - what use is it? 8 months ago:
https://lemmy.world/post/12995686 was a recent question and most of the answers will basically be duplicates of that.
One slight addition I want to add: "Docker" is just one implementation of "OCI containers". It's the one that initially broke through in the hype, but you can just as easily use any other (podman being a popular one), and basically all of the benefits that people ascribe to "docker" apply to those as well.
So you might (as I do) have some dislike for docker (the product) and still enjoy running containers.
- Comment on Docker or podman? 8 months ago:
I personally prefer podman, due to its rootless mode being "more default" than in docker (rootless docker works, but it's basically an afterthought).
That being said: there are just so many tutorials, tools and other resources that assume docker by default that starting with docker is definitely the less cumbersome approach. It's not that podman is significantly harder or has many big differences, but all the tutorials are basically written with docker as the first target in mind.
In my homelab the progression was docker -> rootless docker -> podman and the last step isn't fully done yet, so I'm currently running a mix of rootless docker and podman.
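For most day-to-day tasks podman really is a drop-in replacement; a sketch (the image and port mapping are just examples):

```shell
# Rootless podman exposes the same CLI surface as docker for common tasks.
podman run -d --name web -p 8080:80 docker.io/library/nginx
podman ps
podman logs web
# Fully qualified image names (docker.io/...) avoid registry ambiguity,
# since podman doesn't default to Docker Hub the way docker does.
```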
- Comment on Noob having fun with Self-Hosting story 8 months ago:
You've got a single, old HDD attached via USB. There's plenty of places that could be the bottleneck here, but that's among the first I'd check. Can you actually read from that HDD significantly faster than your network transfer speed? Check that locally first. No use in optimizing anything network-related when your underlying disk IO is slow.
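A quick way to check that raw local read speed, assuming the disk shows up as /dev/sda (device and file paths here are placeholders):

```shell
# Sequential read benchmark of the raw device (needs root).
sudo hdparm -t /dev/sda
# Alternatively, read a large existing file while bypassing the page
# cache, so you measure the disk and not RAM:
dd if=/mnt/usb/some-large-file of=/dev/null bs=1M iflag=direct status=progress
```

If this number is already at or below your network transfer speed, the disk is the bottleneck and no network tuning will help.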
- Comment on Noob having fun with Self-Hosting story 8 months ago:
In the immortal words of Jake the Dog:
Dude, suckin’ at something is the first step to being sorta good at something.
We are or were all noobs once. Going away from the keyboard is often an undervalued step in the solution-finding process. Kudos!
- Comment on Selfhosted photo manager kind of like Jellyfin 8 months ago:
Given the very specific dependencies that Immich has wrt. the Postgres plugins it needs, I'm certain that it's not currently packaged as an RPM and I would even bet that it never will be (at least not as one of the officially supported packages put out by the developers).
- Comment on Should I bother with HTTPS over Tailscale? 9 months ago:
Do you have any devices on your local network where the firmware hasn't been updated in the last 12 months? The answer to that is surprisingly frequently yes, because "smart device" companies are laughably bad about device security. My intercom runs some ancient Linux kernel, my frigging washing machine could be connected to WiFi and the box that controls my roller shutters hasn't gotten an update since 2018.
Not everyone has devices like those, and one could isolate them in VLANs and use other measures, but in this day and age "my local home network is 100% secure" is far from a safe assumption.
Heck, even your router might be vulnerable...
Adding HTTPS is just another layer in your defense in depth. How many layers you are willing to put up with is up to you, but it's definitely not overkill.
- Comment on ghcr.io/linuxserver/plex vs lscr.io/linuxserver/plex 9 months ago:
They're in fact the same image, as you can verify by comparing their digests:
$ docker pull ghcr.io/linuxserver/plex
Using default tag: latest
latest: Pulling from linuxserver/plex
Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
Status: Image is up to date for ghcr.io/linuxserver/plex:latest
ghcr.io/linuxserver/plex:latest
$ docker pull lscr.io/linuxserver/plex
Using default tag: latest
latest: Pulling from linuxserver/plex
Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
Status: Image is up to date for lscr.io/linuxserver/plex:latest
lscr.io/linuxserver/plex:latest
See how both images have the digest sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144. Since the digest uniquely identifies the exact content/image, that guarantees that those images are in fact byte-for-byte identical.
- Comment on Noob question about PiHole 10 months ago:
The issue is that according to the spec the two DNS servers provided by DHCP are equivalent. While most clients favor the first one as the default, that's not universally the case and when and how it switches to the secondary can vary by client (and effectively appear random). So you won't be able to know for sure which client uses your DNS, especially after your DNS server was unreachable for a while for whatever reason. Personally I've "just" gotten a second Pi to run redundant copies of PiHole, but only having a single DNS server is usually fine as well.
- Comment on Should I use a dedicated DHCP/DNS server hardware 10 months ago:
Sidenote about the Pi filesystem self-clobbering: Are you running off of an SD card? Running off an external SSD is way more reliable in my experience. Even a decent USB stick tends to be better than micro-SD in the long run, but even the cheapest external SSD blows both of them out of the water. Since I switched my Pis over to that, they've never had any disk-related issues.
- Comment on How often do you back up? 10 months ago:
IMO set up a good incremental backup system with deduplication and then back up everything at least once a day as a baseline. Anything that's especially valuable can be backed up more frequently, but the price/effort of backing up at least once a day should become trivial if everything is set up correctly.
If you feel like hourly snapshots would be worth it, but too resource-intensive, then maybe replacing them with local snapshots of the file system (which are basically free, if your OS/filesystem supports them) might be reasonable. Those obviously don't protect against hardware failure, but help against accidental deletion.
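As one possible shape of such a setup, using restic as the deduplicating, incremental backup tool (the repository path and source directories are assumptions; borg and similar tools work the same way):

```shell
# One-time: initialize a deduplicating backup repository.
restic -r /mnt/backup/repo init
# Daily, e.g. via a cron job or systemd timer: each run only stores
# new/changed chunks, so repeat runs are cheap.
restic -r /mnt/backup/repo backup /home /etc
# Filesystem snapshots (ZFS/btrfs/LVM) can cover the hours in between.
```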
- Comment on No excuse for shoplifting because UK's benefits system is very generous, policing minister says 10 months ago:
If you see anyone stealing food for themselves, then no, you didn't.
- Comment on Can I build a NAS out of a desktop? [Request] 10 months ago:
Note that there is some reliability drawback to spinning hard disks up and down repeatedly. Maybe counterintuitively, HDDs that spin constantly can live much longer than those that spend 90% of their time spun down.
This might not be relevant if you use only SSDs, and might never affect you, but it should be mentioned.
- Comment on How can I set up a VPN that will use the client IP address for the connection? 11 months ago:
This feels like a XY problem. To be able to provide a useful answer to you, we'd need to know what exactly you're trying to achieve. What goal are you trying to achieve with the VPN and what goal are you trying to achieve by using the client IP?
- Comment on Service for letter/PDF archival 11 months ago:
Note that just because everything is digital doesn't mean something like that isn't necessary: If you depend on your service provider to keep all of your records then you will be out of luck once they ... stop liking you, go out of business, have a technical malfunction, decide they no longer want to keep any records older than X years, ...
So even in a all-digital world I'd still keep all the PDF artifacts in something like that.
And I also second the suggestion of paperless-ngx (even though I haven't been using it for very long yet, it's working great so far).
- Comment on Immich is awesome 11 months ago:
Ask yourself what your "job" in the homelab should be: do you want to manage what apps are available or do you want to be a DB admin? Because if you are sharing DB-containers between multiple applications, then you've basically signed up to checking the release notes of each release of each involved app closely to check for changes like this.
Treating "immich+postgres+redis+..." as a single unit that you deploy and upgrade together makes everything simpler at the (probably small) cost of requiring some more resources. But even on a 4GB-ram RPi that's unlikely to become the primary issue soon.
- Comment on Proper HDD clear process? 11 months ago:
There are many different ways with different performance tradeoffs. For example, on my homelab server I've set it up so that I have to enter the passphrase on every boot, which isn't often. But I've also set it up to run an SSH server so I can enter it remotely.
On my work laptop I simply have to enter it on each boot, but it mostly just goes into suspend.
One could also keep the key on a USB stick (or better, use a YubiKey) and unplug it whenever that's reasonable.
- Comment on Proper HDD clear process? 11 months ago:
Just FYI: the often-cited NIST-800 standard no longer recommends/requires more than a single pass of a fixed pattern to clear magnetic media. See https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-88r1.pdf for the full text. In Appendix A "Guidelines for Media Sanitation" it states:
Overwrite media by using organizationally approved software and perform verification on the
overwritten data. The Clear pattern should be at least a single write pass with a fixed data value,
such as all zeros. Multiple write passes or more complex values may optionally be used.
This is the standard that pretty much birthed the "multiple passes" idea, but modern HDD technology has made that essentially unnecessary (unless you are combating nation-state-sponsored attackers, in which case you should be physically destroying the drive anyway, preferably using some high-heat method).
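In practice that single Clear pass can be as simple as the following (the target /dev/sdX is a placeholder; triple-check the device name, this is irreversibly destructive):

```shell
# Single fixed-value write pass over the whole disk, per NIST SP 800-88r1.
dd if=/dev/zero of=/dev/sdX bs=4M status=progress conv=fsync
# Spot-check the result, e.g. confirm the first MiB reads back as zeros:
cmp -n 1048576 /dev/sdX /dev/zero
```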
- Comment on Proper HDD clear process? 11 months ago:
It's not much use now, but to basically avoid the entire issue, just use whole-disk encryption the next time. Then the disk is effectively pre-wiped as soon as you "lose" the encryption key: simply deleting the partition table will present the disk as empty and there's no chance of recovering any prior content.
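A minimal sketch of setting that up with LUKS (the device name is a placeholder, and this destroys any existing data on the disk):

```shell
# Encrypt the whole disk; you'll be prompted for a passphrase.
cryptsetup luksFormat --type luks2 /dev/sdX
cryptsetup open /dev/sdX cryptdata
mkfs.ext4 /dev/mapper/cryptdata
# Later, "wiping" the disk reduces to destroying the key slots:
# cryptsetup erase /dev/sdX
```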
- Comment on how much backing up would you do of a media server? 11 months ago:
That saying also means something else (and imo more important): RAID doesn't protect against accidental or malicious deletion/modification. It only protects against data loss due to hardware fault.
If you delete stuff or overwrite it then RAID will dutifully duplicate/mirror/parity-check that action, but doesn't let you go back in time.
That's the same reason why just syncing the data automatically to another target also isn't the same as a full backup.
- Comment on Useful apps to self-host 11 months ago:
That being said: backing up to a single, central, local location and then backing up that to some offsite location can actually be very efficient (and avoids having to spread the credentials for whatever off-site storage you use to multiple devices).
- Comment on What is the most efficient method to set up a home server? 11 months ago:
RAID 5 with 3 drives survives one dying disk. RAID 1 (mirroring) with 2 disks survives one dying disk. If either setup loses two disks, all the data is gone.
When you run 3 disks then the odds of two failing are higher than if you run 2 disks.
So 3 disks are not significantly safer and might even be worse.
That being said: both setups are fine for home use, because you've set up real backups anyway, right?
- Comment on Remote solution to decrypt disk at boot 11 months ago:
I'm using encrypted ZFS as the root partition on my server and I've (mostly) followed the instructions in point #15 from here: https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bookworm%20Root%20on%20ZFS.html
This starts dropbear as an SSH server that only has a single task: when someone logs in to it they get asked for the decryption key of the root partition.
I suspect that this could be adapted to whatever encryption mechanism you use.
I didn't follow it exactly, because I didn't want the "real" SSH host keys of the host to be accessible unencrypted in the initrd, so the "locked host" has a different SSH host key than the fully booted one, which I prefer.
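On Debian-based systems the rough shape of that setup looks like this (package and path names vary between releases, so treat it as a sketch rather than exact commands):

```shell
# Install dropbear into the initramfs so SSH is reachable before
# the root filesystem is unlocked.
apt install dropbear-initramfs
# Authorize your client key for the pre-boot environment
# (on older releases the path is /etc/dropbear-initramfs/authorized_keys).
echo 'ssh-ed25519 AAAA... you@laptop' >> /etc/dropbear/initramfs/authorized_keys
update-initramfs -u
# At boot: ssh root@server, then supply the decryption key when prompted.
```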
- Comment on [deleted] 11 months ago:
You don't need a dedicated git server if you just want a simple place to store git repositories. Simply place a git repository on your server, use
ssh://yourserver/path/to/repo
as the remote URL, and you can push/pull.
If you want more than that (i.e. a nice Web UI, user management, issue tracking, ...) then Gitea is a common solution, but you can even run GitLab itself locally.
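The whole server-side "setup" is a single command; a sketch with placeholder paths and hostname:

```shell
# On the server: create a bare repository (no working tree needed).
git init --bare /srv/git/myproject.git
# On any client with SSH access to the server:
git clone ssh://yourserver/srv/git/myproject.git
git -C myproject remote -v   # shows the ssh:// remote
```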