ShortN0te
@ShortN0te@lemmy.ml
- Comment on noob questions seeking non-noob answers 1 hour ago:
I think you are missing the point of how easy it is to fuck things up in a console
No, I think you are. Why should a beginner ever even touch the CLI? You can also SSH into the Synology and fuck things up.
Using a ‘friendly environment’ like Synology is no guarantee that you will not fuck things up.
Installing TrueNAS when you have no idea about almost anything is cumbersome, dealing with the millions of options (some of them incompatible with each other) is frustrating, cryptic error codes are discouraging…
What millions of options? You select a drive and set a password, and you’re done? One step fewer than on Synology.
You brought up TrueNAS. TrueNAS, for example, also gives you safe boundaries and suggestions on how to set things up. Same as Synology. There is literally also a setup wizard for backups.
AND AGAIN, just because you follow the Synology wizards does not mean your data is safe either. You can always fuck things up if you want to.
- Comment on noob questions seeking non-noob answers 4 hours ago:
I see your point, but in this world there are only 2 options: either you have the skills, the knowledge and the time to do it yourself, or you need to outsource it.
But you’re not outsourcing it?! You just chose a proprietary provider for a Docker Compose file and some RAID configuration. Everything is still on you to fuck up.
Assuming that the OP is a real noob, it is clear that the first 2 prerequisites are missing, making that option unacceptable; then you can only go and buy something easy enough for the general public.
Reading OP’s post again, it’s clear that OP is interested in learning those things.
And on top of that, in a homelab, the most sacred thing is the data. Not the service, the data. If you misconfigure a NAS or the automated backup system, it could lead to the worst scenario: the data is lost forever.
The exact same is true for your Synology NAS. Plus the limitations of how Synology thinks you should do backups vs. how it actually suits you.
- Comment on noob questions seeking non-noob answers 19 hours ago:
I would absolutely discourage the use of synology and probably any other brand in the NAS realm.
Synology has pulled off some really scummy things in the last few years: their certified SSDs, where only a whitelist of SSDs could be used in an array, or when they tried to push their own HDDs and showed warnings and messages to worry the user that something is wrong. They also retroactively removed transcoding capabilities from their systems.
Those systems are all quite limited for how expensive they are. They are great for just simple things, but with the list OP posted, you would be heavily limited and have to jump through hoops in order to have a well-functioning home lab/server.
- Comment on noob questions seeking non-noob answers 19 hours ago:
I’ve heard AMD’s onboard graphics are pretty good these days, but I haven’t tried AMD CPUs on a server.
The main issue is AFAIK still the software support; NVIDIA and Intel are years ahead here.
The benefit of going with a dGPU is that in a few years, when for example AV1 takes off even more, you can just swap the GPU and be done, instead of swapping the whole system. That at least was my thinking on my setup. My CPU, a 3600X, is probably still good for another 10 years.
- Comment on noob questions seeking non-noob answers 20 hours ago:
Do not go for server hardware; used consumer hardware is good enough for your use cases. Basically any machine from the last 5-10 years is powerful enough to handle the load.
The most difficult decision is the GPU or transcoding hardware for your Jellyfin. Do you want to be power efficient? Then go with a modern but low-end Intel CPU; there you get Quick Sync as the transcoding engine. If not, I would go for a low-end NVIDIA GPU like the 1050 Ti or newer, and for example an old AMD CPU like the 3600.
For storage, it also depends on budget. Having a backup of your data is much more important than having redundancy. You do not need to back up your media, but everything that is important to you, like the photos in Immich etc.
I would go SSD since you do not need much storage: a separate 500 GB drive for your OS and a 4 TB one for the data. This is much more compact, reduces power consumption, and is especially for read-heavy applications much more durable and faster in operation, with less noise etc.
Of course, HDDs are good enough for your use case and cheaper (a factor of 2.5-3x cheaper here).
Probably 8-16 GB RAM would be more than enough.
For any local redundancy or RAID, I would always go with ZFS.
- Comment on Notes on full disk encryption on a Hetzner cloud VPS 6 days ago:
Yes, it is called multithreading. Just one example: github.com/BrandonBerne/masscan
- Comment on Notes on full disk encryption on a Hetzner cloud VPS 1 week ago:
Stupid me, missed the IP whitelisting part.
- Comment on Notes on full disk encryption on a Hetzner cloud VPS 1 week ago:
LUKS may not make your server meaningfully more secure. Anyone who can snapshot your server while it’s running or modify your unencrypted kernel or initrd files before you next unlock the server will be able to access your files.
This is a little oversimplified. Hardware vendors have done a lot of work in the last 10-20 years to make it hard to impossible to obtain data this way. AMD-SEV for example.
There are other, more realistic attacks, like simply extracting the SSH server signature, MITMing the SSH connection, and extracting the LUKS password.
- Comment on Notes on full disk encryption on a Hetzner cloud VPS 1 week ago:
The whole port range can be scanned in under a second. A real attack does not care if your SSH port is 22 or 69420. Changing the port is just snake oil.
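To illustrate the point, here is a minimal sketch of a concurrent TCP connect scan in Python (the host and port range are placeholders; stateless tools like masscan cover all 65535 ports far faster than this):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def check_port(host, port, timeout=0.2):
    """Return the port if a TCP connect succeeds, else None."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return port
        except OSError:
            return None

def scan(host, ports, workers=200):
    """Probe the given ports concurrently and return the open ones."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        hits = pool.map(lambda p: check_port(host, p), ports)
    return [p for p in hits if p is not None]

# Demo on a tiny placeholder range; a real scan walks the full range,
# so a non-standard SSH port is found almost as quickly as port 22.
print(scan("127.0.0.1", range(8000, 8010)))
```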
- Comment on Notes on full disk encryption on a Hetzner cloud VPS 1 week ago:
Use DDNS or similar to keep track of your IP?
- Comment on They Said Self-Hosting Was Hard! - arthurpizza 1 week ago:
Honestly, in the last ~2 years I have had to manually intervene fewer than 5-10 times, and that is counting from way before the stable release. So I doubt that.
- Comment on How do you effectively backup your high (20+ TB) local NAS? 2 weeks ago:
That should be part of the backup configuration. You select in the backup tool of your choice what you back up. When you lose your array, then you download that stuff again?
- Comment on A sneaky demonstration of the dangers of curl bash 2 weeks ago:
Yes, the secrets used to submit to the distribution system got compromised, and therefore the system got compromised.
- Comment on A sneaky demonstration of the dangers of curl bash 2 weeks ago:
To achieve a compromised update, you either need to compromise the update infrastructure AND the key, or the infrastructure AND exploit the local updater to accept the invalid or forged signature.
As I said, to compromise a signature-checked update over the internet you need to compromise both the distributing infrastructure AND the key. With just either one it’s not possible. (Ignoring flaws in the code, of course.)
- Comment on A sneaky demonstration of the dangers of curl bash 2 weeks ago:
After gaining initial access, the malicious cyber actor deployed malware that scanned the environment for sensitive credentials.
So, as I said, the keys got compromised. That’s what I said in the second post.
- Comment on A sneaky demonstration of the dangers of curl bash 3 weeks ago:
No, you cannot; the public key either needs to be present in the updater, or it uses infrastructure that is not owned by you. The way most software suppliers usually do it, the public key is shipped within the updater.
- Comment on A sneaky demonstration of the dangers of curl bash 3 weeks ago:
This is incorrect. If the update you download is compromised, then the signature is invalid and the update fails.
To achieve a compromised update, you either need to compromise the update infrastructure AND the key, or the infrastructure AND exploit the local updater to accept the invalid or forged signature.
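A minimal sketch of that idea; note that HMAC stands in here for the real asymmetric signature (GPG, Ed25519, …), where the verifier would only hold the public key and could not sign anything itself:

```python
import hashlib
import hmac

# Hypothetical stand-in: in a real updater this key never leaves the
# vendor's signing machine; the client only ships a public verify key.
SIGNING_KEY = b"vendor-signing-key"

def sign_update(payload: bytes) -> bytes:
    """Vendor side: produce a signature over the update payload."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def verify_update(payload: bytes, signature: bytes) -> bool:
    """Client side: reject the update unless the signature checks out."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

update = b"release-1.2.3"
sig = sign_update(update)

# An attacker who controls only the download server can swap the
# payload, but without the key the signature no longer verifies:
tampered = b"malicious-release"
print(verify_update(update, sig))    # True  - genuine update
print(verify_update(tampered, sig))  # False - update is rejected
```

That is why compromising only the distribution infrastructure is not enough.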
- Comment on A sneaky demonstration of the dangers of curl bash 3 weeks ago:
Not completely correct. A lot of updaters work with signatures to verify that what was downloaded is signed by the correct key.
With curl | bash there is no such check in place.
So strictly speaking, it is not the same.
- Comment on OpenClaw with Docker. Is it safe? 5 weeks ago:
Simply put, no. In order to be safe with an LLM that can execute stuff on its own, it needs to be completely sandboxed.
A very nice talk about flaws in agentic AI can be found here: …ccc.de/…/39c3-agentic-probllms-exploiting-ai-com…
- Comment on Non-US cloud storage for backup? 5 weeks ago:
I can also recommend the object storage from Hetzner for backups. Quite price-competitive.
- Comment on what is good remote desktop software? 1 month ago:
It actually does both. I have not really tested the multi-monitor features, but it is there and it works; not sure if to the same degree as in RDP.
- Comment on Server ROI Calculator 1 month ago:
There is a box for manually added monthly savings. But yes, hard to classify what you would actually subscribe to if you would not have a server already.
But the same goes for video. I would never pay for 3 streaming services at a time.
- Comment on How do I avoid becoming one with the botnet? 1 month ago:
The other answer is already good, but I will answer more generally.
Rate limiting. Do not allow as many requests as your CPU can handle; limit authentication requests instead. Even a couple of requests per second already goes a long way.
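As a sketch, a simple sliding-window limiter in Python (the limit, window, and client IP are made-up example values; in practice you would use fail2ban or your reverse proxy’s built-in rate limiting):

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `limit` auth attempts per `window` seconds,
    tracked per client (e.g. per source IP)."""

    def __init__(self, limit=5, window=1.0):
        self.limit = limit
        self.window = window
        self.hits = {}  # client -> deque of attempt timestamps

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(client, deque())
        # Drop attempts that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # reject: client must slow down
        q.append(now)
        return True

limiter = RateLimiter(limit=5, window=1.0)
results = [limiter.allow("203.0.113.7", now=0.0) for _ in range(7)]
print(results)  # → [True, True, True, True, True, False, False]
```

A brute-forcer capped at a handful of attempts per second per IP is effectively useless against any decent password.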
- Comment on How do I avoid becoming one with the botnet? 1 month ago:
The ‘immediate attacks’ ppl mention are just static background noise: servers/scripts that run trying to find misconfigured, heavily out-of-date, or exploitable endpoints/servers/software.
Once you update your software, set up basic brute force protection and maybe regional blocking, you do not have to worry about this kind of attack.
Much more scary are so called 0-Day attacks.
- No one will waste an expensive exploit on you
- It sometimes can happen that 0-days that become public get widely exploited and take a long time to get closed, like for example Log4Shell did. Here some work is necessary to inform yourself and disable things according to what is patched and what is not.
As I already said, no one will waste time on you; there are so many easier targets out there that do not follow those basic rules, or actually valuable targets.
There is obviously more that you can do, like hiding everything behind a VPN or advanced threat detection. The kind of software you choose to run is also relevant.
- Comment on A dummy's request for Nepenthes 2 months ago:
Yeah I’m not saying its perfect and LLMs are non-deterministic so it could give you some crap. You’re not wrong and it’s good to be aware of that. How do you verify some random stranger from the internet wasn’t an asshole and gave you malicious config? 🤷
There is no guarantee either, but on a public forum at least a couple of eyes look at it too. Not saying that this makes it trustworthy. But an LLM usually words its output very directly, as if saying “this is the absolute truth”, which can lead to a much higher trust relation than a stranger on a forum who writes “maybe try this”.
I generally would not recommend using an LLM for potentially security-related questions (or important or professional questions) where your own knowledge is not big enough to quickly vet the output.
- Comment on A dummy's request for Nepenthes 2 months ago:
You are still talking about someone who is not able to create the config themselves, but that someone should be able to test everything?
- Comment on A dummy's request for Nepenthes 2 months ago:
But still, how would you verify whether the config is good or not? For example, if it exposes root?
- Comment on New Community Rule: "No low-effort posts. This is subjective and will largely be determined by the community member reports." 3 months ago:
The discussion is about low-effort, link-only video and/or other posts. If you are not referring to those, then you missed the point.
- Comment on New Community Rule: "No low-effort posts. This is subjective and will largely be determined by the community member reports." 3 months ago:
It seems the majority does not want it.
If ppl do not like it, they can use another selfhosted community from another instance. That’s what Lemmy and the fediverse are built for.
- Comment on New Community Rule: "No low-effort posts. This is subjective and will largely be determined by the community member reports." 3 months ago:
Most ppl seem to agree with me.