Also, doing basic things helps, like running your webserver in a VM. And you can write some script or something to block any IP that is port scanning you, I'm pretty sure; I would do that if I was hosting (rough sketch below). Also remember to block port scanning in Firefox. It's not enabled by default, and it helps keep you safe when you land on a webpage that tries to port-scan you.
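Something like this rough Python sketch is one way to do the blocking (a sketch only: it assumes Linux, root, iptables, and the scapy library, and the window and port-count thresholds are made-up examples). It counts distinct destination ports per source IP and drops sources that touch too many too fast:

```python
#!/usr/bin/env python3
# Rough sketch of a scan blocker. Assumes root, Linux, iptables present,
# and scapy installed (pip install scapy). Thresholds are arbitrary.
import subprocess
import time
from collections import defaultdict

from scapy.all import IP, TCP, sniff

WINDOW = 10      # seconds
PORT_LIMIT = 15  # distinct ports from one IP inside WINDOW = probable scan

hits = defaultdict(set)  # src ip -> set of dst ports seen
first_seen = {}          # src ip -> start of its current window
blocked = set()

def handle(pkt):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        return
    src, dport = pkt[IP].src, pkt[TCP].dport
    now = time.time()
    if now - first_seen.setdefault(src, now) > WINDOW:
        # window expired: reset the counters for this source
        first_seen[src] = now
        hits[src].clear()
    hits[src].add(dport)
    if len(hits[src]) > PORT_LIMIT and src not in blocked:
        blocked.add(src)
        subprocess.run(["iptables", "-I", "INPUT", "-s", src, "-j", "DROP"])
        print(f"blocked {src} after it touched {len(hits[src])} ports")

# Sniff inbound SYNs only (BPF filter); runs until interrupted.
sniff(filter="tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0",
      prn=handle, store=False)
```

Counting distinct ports per source in a short window is the classic cheap heuristic; real tools add decay, allowlists, and persistence on top of it.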
derek@infosec.pub 5 days ago
You can meaningfully portscan the entire IPv4 internet in a trivial amount of time (tools like masscan and ZMap can sweep a single port across the whole address space in under an hour). Security by obscurity doesn't work; you just get blindsided. Switching to a non-standard port does clean the logs up, though, because most of the background noise targets standard ports.
It sounds like you're doing alright so far. Trying not to get got is only part of the puzzle though. You also ought to have a backup and recovery strategy (one tactic is not a strategy). Figuring out how to turn worst-case scenarios into solvable annoyances instead of apocalypses is another part, and almost as important. If you're trying to increase your resiliency and your Disaster Recovery isn't fully baked yet, I'd toss effort that way.
DarkAri@lemmy.blahaj.zone 4 days ago
derek@infosec.pub 4 days ago
Absolutely. VMs and containers are the wise sysadmin's friends. Instead of rolling my own IP blocker I use Fail2ban on public-facing machines. It's invaluable.
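For anyone following along: Fail2ban tails your log files and bans IPs that trip a filter too many times. A minimal override might look like this (values are illustrative, tune to taste):

```ini
# /etc/fail2ban/jail.local -- illustrative values
[DEFAULT]
# ban for an hour after 5 failures within 10 minutes
bantime  = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
```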
DarkAri@lemmy.blahaj.zone 3 days ago
Cool, I have some ideas as well. One is a script that hashes your configuration files and requires a secret password to put them into an edit mode; if a config changes without edit mode being enabled first, the server gets disconnected (rough sketch below). Maybe use a Raspberry Pi that's hidden from the network to do the checking. I know that wouldn't work for large websites, since they can't afford to go down for hours at a time, but it would give you an additional layer of security for sensitive stuff. I'm more into game programming, but I know how exploits work and such, and I'm pretty sure many things like this already exist on the market.

One idea I had was pretty neat: in your EULA you reserve the right to hack back people who try to hack you, and you run an automated system that uses some known exploits to get a ping from, or maybe install a rootkit on, anyone trying to mess around in your system. Later you can just get on and deanonymize them. This requires actually spending time researching your own zero-days. People in DEF CON hacking competitions do this; they're sort of masters with decompilers and hex editors.
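A rough sketch of the hashing idea, for flavor. Everything here is a placeholder: the watched paths, the edit-mode flag file, and the "disconnect" response (which just exits, so a supervisor or that hidden Pi could react however you like):

```python
#!/usr/bin/env python3
# Sketch: hash watched config files and raise the alarm if one changes
# while edit mode is off. Run once with --baseline to record known-good
# hashes. All paths and the response action are illustrative stand-ins.
import hashlib
import json
import sys
import time
from pathlib import Path

WATCHED = [Path("/etc/nginx/nginx.conf"), Path("/etc/ssh/sshd_config")]
BASELINE = Path("/var/lib/confwatch/baseline.json")
EDIT_MODE_FLAG = Path("/run/confwatch.edit")  # created only after auth

def digest(path: Path) -> str:
    try:
        return hashlib.sha256(path.read_bytes()).hexdigest()
    except FileNotFoundError:
        return "missing"  # a deleted config counts as a change

def snapshot() -> dict:
    return {str(p): digest(p) for p in WATCHED}

if "--baseline" in sys.argv:
    BASELINE.parent.mkdir(parents=True, exist_ok=True)
    BASELINE.write_text(json.dumps(snapshot()))
    sys.exit(0)

baseline = json.loads(BASELINE.read_text())
while True:
    if not EDIT_MODE_FLAG.exists() and snapshot() != baseline:
        # Stand-in for "disconnect the server": down an interface,
        # page someone, cut a switch port from the hidden Pi, etc.
        print("config drift detected outside edit mode!", file=sys.stderr)
        sys.exit(1)
    time.sleep(30)
```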
confusedpuppy@lemmy.dbzer0.com 4 days ago
Early on, when I was learning self-hosting, I lost my work and progress a lot. Through all that I learned how to make a really solid backup/restore system that works consistently.
Each device I own has its own local backup. I copy those backups to a partition on my computer dedicated to backups, and that partition gets copied again to an external SSD which can be disconnected. Restoring from the external SSD to my computer's backup partition, and then out to each device, all works to my liking. I feel quite confident in my setup; it took a lot of failure to gain that confidence.
I also spent time hardening my system. I went through this Linux hardening guide and applied what I thought was appropriate for my web-facing server. Since the guide seems aimed more at personal computers (I think), the majority of it didn't apply to my use case. I also use Alpine Linux, so there was even less I could do, but it was still helpful for understanding how much effort securing a computer takes.
derek@infosec.pub 4 days ago
That sounds pretty good to me for self-hosted services you're running just for you and yours. The only addition I have on the DR front is an off-site backup. I prefer restic for file-level backups, Proxmox Backup Server for image backups (Clonezilla works in a pinch), and Backblaze B2 for off-site storage; they're reliable and reasonably priced. If a third-party service isn't in the cards, get a second SSD and put it in a safety deposit box, or bury it on the other side of town or something, and swap the two backup disks once a month.
The point is to follow the 3-2-1 principle: three copies of your data, on two different storage media, with at least one in a remote location. If disaster strikes and your home disappears, you want something to restore from rather than losing absolutely everything.
Extending your current setup to ship the external SSD's contents out to B2 would likely just be pointing rclone at your B2 bucket (rsync itself can't speak B2's API, but rclone can, and restic can back up to B2 directly too) and scheduling a cron job or systemd timer to run it; see the sketch below.
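A minimal sketch of the systemd-timer route. The unit names, paths, and bucket are examples, and it assumes rclone is installed with a B2 remote named `b2` already configured via `rclone config`:

```ini
# /etc/systemd/system/offsite-backup.service
[Unit]
Description=Sync local backup partition to Backblaze B2

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone sync /mnt/backups b2:my-backup-bucket

# /etc/systemd/system/offsite-backup.timer
[Unit]
Description=Run the off-site backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now offsite-backup.timer`; `Persistent=true` makes it catch up on the next boot if the machine was off when the timer should have fired.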
After that, if you're itching for more, I'd suggest reading/watching some Red Team content like the material at hacker101.com and sans.org. owasp.org is also building some neat educational tools. Getting a better understanding of the what and why behind internet background noise and threat-actor patterns is powerful.
You could also play around with Wazuh if you want to launch straight into the Blue Team weeds. Understanding the attacking side is essential for us to be effective defenders, but deeper learning anywhere across the spectrum is always a good thing. Standing up a full-blown SIEM/XDR for free offers a lot of education.
P.S. I realize this is all tangential to your OP. I don't care for the grizzled killjoys who chime in with "that's dumb, don't do that" or similar, offer little helpful insight, and trot off arrogantly over the horizon on their high horse. I wanted to be sure I offered actionable suggestions for improvement and was tangibly helpful.
sugar_in_your_tea@sh.itjust.works 5 days ago
Exactly. Using nonstandard ports will clean up the logs a bit, but an actual attacker doesn't care which ports you use.