sandalbucket
@sandalbucket@lemmy.world
- Comment on How screwed would one be if their email provider shuts down? 5 days ago:
For historic emails, you could set up a forwarding rule from the primary to the backup. This would need to be done in advance, of course.
- Comment on Why is UI design backsliding? 1 month ago:
I love Ed. He is a fantastic writer.
- Comment on Studios are cracking down on some of the internet’s most popular pirating sites 2 months ago:
Private trackers disgust me. What kind of pirate turns away from the world to re-seed fragments of files they don’t care about, to other cowards with slightly slower RSS feeds, all for a chance at enough ratio to get the show you want? It’s a country club, with self-validating assholes, dry hot dogs, and tall fences.
The Mainline DHT is the way forward. There is no social credit here. The kids in Africa are starving, and I will throw them as much as I can, kilobyte by kilobyte, for no reason at all, for I too was a leecher once.
- Comment on Good guides for the security you need to set up for self hosting? 3 months ago:
Anything exposed to the internet will be found by the scanners. Moving ssh off port 22 doesn’t do anything except make it less convenient for you to use. The scanners will find it, and when they do, they will try to log in.
(It’s actually pretty easy to write a little script that listens on port 23 (telnet) and collects the default login creds that the worms so kindly share.)
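A minimal sketch of such a listener, assuming Python and nothing but the standard library (the port, prompts, and log format here are my choices, not any standard):

```python
# Fake telnet login prompt: log whatever creds the automated scanners offer.
# Binds 2323 so it can run unprivileged; forward or bind 23 at your own risk.
import socketserver
from datetime import datetime, timezone

PORT = 2323

class FakeTelnet(socketserver.StreamRequestHandler):
    def handle(self):
        self.wfile.write(b"login: ")
        user = self.rfile.readline(128).strip()
        self.wfile.write(b"password: ")
        password = self.rfile.readline(128).strip()
        stamp = datetime.now(timezone.utc).isoformat()
        print(f"{stamp} {self.client_address[0]} tried {user!r} / {password!r}")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", PORT), FakeTelnet) as srv:
        srv.serve_forever()
```

Leave it running for a day and you get a tidy list of exactly what the worms are guessing.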
The thing that protects you is strong authentication. Turn off password auth entirely and use a strong keypair instead (ed25519 is the usual choice these days). Disable root login entirely.
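For stock OpenSSH that boils down to a couple of sshd_config directives plus a keypair from ssh-keygen (a sketch of the relevant lines, not a complete config):

```
# /etc/ssh/sshd_config - relevant lines only
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```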
Most self-hosted software is built by hobbyists with some goal in mind, and rock-solid authentication is generally not that goal. You should, if you can, put most things behind a reverse proxy with a strong auth layer, like Teleport.
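Teleport is its own proxy, so it won’t look like this, but as a generic illustration of the pattern: nginx can gate any upstream behind a separate auth service via its auth_request module (the upstream and the /auth endpoint here are placeholders):

```
location / {
    auth_request /auth;              # any non-2xx from the subrequest blocks this request
    proxy_pass http://app:8080;      # placeholder upstream
}
location = /auth {
    internal;
    proxy_pass http://auth-service/verify;   # placeholder auth backend
}
```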
You will get lots of advice to hide things behind a vpn. A vpn provides centralized strong authentication. It’s a good idea, but decreases accessibility (which is part of security) - so there’s a value judgement here between the strength of a vpn and your accessibility goals.
Some of my services (ssh, wg, nginx) are open to the internet. Some are behind a reverse proxy. Some require a vpn connection, even within my own house. It depends on who it’s for - just me, technical friends, the world, or my technically-challenged parents trying to type something with a roku remote.
After strong auth, you want to think about software vulnerabilities - and you don’t have to think much, because there’s only one answer: keep your stuff up to date.
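On Debian-family boxes, for example, that can be as hands-off as installing the unattended-upgrades package and leaving two lines of apt config in place (a sketch from memory; check your distro’s docs):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```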
All of the above covers the P in PICERL (pick-uh-rel), which stands for Prepare. The I stands for Identify, and this is tricky. In an ideal world, you get a real-time notification (on your phone if possible) when any of these things happen:
- Any successful ssh login
- Any successful root login
- If a port starts listening that you didn’t expect (there’s a sketch of this check just after the list)
- If the system watching for these things goes down (have two systems that watch each other)
That list could be much longer, but that’s a good start.
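For the unexpected-listener bullet, a minimal sketch, assuming Linux with ss on the PATH; the allowlist is hypothetical and notify() is a placeholder you’d wire to your phone yourself:

```python
# Poll listening TCP ports once a minute; alert on anything not allowlisted.
import subprocess
import time

EXPECTED = {22, 80, 443}  # hypothetical allowlist: ssh, http, https

def listening_ports() -> set[int]:
    out = subprocess.run(["ss", "-tlnH"], capture_output=True, text=True,
                         check=True).stdout
    ports = set()
    for line in out.splitlines():
        local = line.split()[3]   # local address, e.g. "0.0.0.0:22" or "[::]:443"
        ports.add(int(local.rsplit(":", 1)[1]))
    return ports

def notify(msg: str) -> None:
    print("ALERT:", msg)  # placeholder: page yourself here instead

while True:
    unexpected = listening_ports() - EXPECTED
    if unexpected:
        notify(f"unexpected listeners on ports {sorted(unexpected)}")
    time.sleep(60)
```

Run a copy on two machines, each also checking that the other is up, and you’ve covered the watch-the-watcher bullet too.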
After Identification, there’s Contain + Eradicate. In a homelab context, that’s probably a fresh re-install of the OS. Attacker persistence mechanisms are insane - once they’re in, they’re in. Reformat the disk.
R is for recover or remediate depending on who you ask. If you reformatted your disks, it stands for “rebuild”. Combine this with L (lessons learned) to rebuild differently than before.
To close out this essay though, I want to reiterate Strong Auth. If you’ve got strong auth and keep things up to date, a breach should never happen. A lot of people work very hard every day to keep the strong auth strong ;)
- Comment on Microsoft points finger at the EU for not being able to lock down Windows 3 months ago:
For the Nth time: CrowdStrike circumvented the testing process.
- Comment on Why is the US not considered a third world country? 4 months ago:
It’s not rocket appliances
- Comment on EU charges Microsoft with 'abusive' bundling of Teams and Office, breaching antitrust rules 4 months ago:
But MS Teams is very secure! It’s sandboxed in a web browser :) It’s effectively a single-tab display of an entire RAM-eating Chromium process :)
The only unfortunate side effect is that it can’t read your system default audio output, so it uses a cryptographically secure random number to decide which other audio output to use. That’s right - it very securely knows about all of your audio outputs, even though they aren’t the system default :)
Did you just try to send someone a file? Don’t worry, I’ve put the file in SharePoint for you and sent them a link instead. Actually, wait - you had already sent that to someone else, so I sent file (1).docx instead. Actually wait - that was taken too. Now it’s file (2).docx.
I would like to provide a friendly reminder that you will need to manage the file-sharing permissions in SharePoint should anyone else join this 1-on-1 direct message chat :)
- Comment on domains on internal network 4 months ago:
I strongly recommend the NAT loopback route over attempting split-horizon DNS.
- Comment on Why we don't have 128-bit CPUs 4 months ago:
I think it’s a D-tier article. I wouldn’t be surprised if half of it was GPT output. It could have been summarized in a single paragraph, but it was clearly drawn out to make screen real estate for the ads.
- Comment on It definitely *was* a good idea though 6 months ago:
Fortunately, diatomaceous (or however you spell it) earth is not very “humane”. It cuts through their wax layer as they crawl across it, leaving just enough of a gap that they can’t retain moisture, and they dehydrate / mummify to death.
This fun fact brought me much comfort while I lay in bed, slapping every itch and wincing at every breeze.
- Comment on .rar me 6 months ago:
I’ve been zipping things all day, because then it’s only one blob in the container, and you can use WEBSITE_RUN_FROM_PACKAGE, which is just about the only way to get Azure Functions stood up via infra-as-code.
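For anyone following along at home, that’s the WEBSITE_RUN_FROM_PACKAGE app setting, set to 1 or to a blob URL depending on the plan; whatever IaC flavor you use, it reduces to something like this (shown via the CLI for illustration; names are placeholders):

```
az functionapp config appsettings set \
  --name my-func --resource-group my-rg \
  --settings WEBSITE_RUN_FROM_PACKAGE=1
```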
But whatever unzip implementation Azure uses sure isn’t the Linux default, because it doesn’t support symlinks. And pnpm uses almost exclusively symlinks, pointing back to its central package store, so re-installing doesn’t take 8 years like it does with npm.
But that’s fine, because zip will follow symlinks and bake the actual files in, in place - which is pretty slick. But then the Azure Functions package resolver can’t seem to figure out what the hell is going on, because pnpm is still putting the actual dependencies in node_modules/.pnpm.
So we pass --shamefully-hoist, which is a great name for a flag, and which puts everything at the top level of node_modules. Now zip works and Azure works - but each dependency also comes with its own node_modules, with another symlink to a package that’s already at the top level. So it works, but it’s 10x bigger than it needs to be - 6.4 MB instead of 668 KB.
Fortunately, we can use our build script to populate a .npmrc file and set node-linker to hoisted, at which point pnpm will mimic npm with no symlinks at all - small, efficient, and dumb enough that the Azure Functions runtime can figure out how to deal with it.
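i.e. the build script drops something like this into the project (node-linker is a documented pnpm setting; hoisted produces a flat, npm-style node_modules):

```
# .npmrc
node-linker=hoisted
```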
It took me 4 hours to debug this mess.
All that to say, yes, a weighted blanket would be downright delightful right now, but please keep the zip files away from me