Nibodhika
@Nibodhika@lemmy.world
- Comment on Help with domain 1 day ago:
Lots of questions, let’s take it one step at a time. You have a domain; now you can point it to your public IP, so that whenever someone tries to access example.com they ask their DNS server and it replies with 10.172.172.172 (which, btw, is not a valid public IP). That request will hit your router, so you need to configure the router to forward ports 80 and 443 to 192.168.200.101; that way the request to example.com reaches your local machine.
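If you want to sanity-check the DNS step, something like this works (using the example values from above):

```bash
# should print your public IP, e.g. 10.172.172.172 in this example
dig +short example.com
```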
Ok, so now you need your local machine to reply on those ports. I recommend Caddy, it’s very easy to set up, but NGINX is the more traditional approach. A simple Caddy config would look like:
```
example.com {
    respond "Hello"
}

jellyfin.example.com {
    handle {
        reverse_proxy http://192.168.200.101:1020
    }
}
```
So after the request reaches Caddy, it will see that the person tried to access example.com and respond with a “Hello”.
If instead you had tried jellyfin.example.com, the DNS would have sent you to 10.172.172.172, your router would send that to 192.168.200.101, and Caddy would then forward it to 192.168.200.101:1020, which is Jellyfin, so that’s what would get returned.
There are some improvements that can be made. For example, if both Caddy and Jellyfin run in Docker you can share a network between them so Jellyfin is only exposed through Caddy. Another possibly good idea is to add an authentication service like Authelia or Authentik to harden things a bit. Also, as you might have noticed, Caddy can forward requests to other computers, so you can have one machine on your network exposing services running on multiple machines.
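For the shared-network idea, a minimal sketch, assuming containers named caddy and jellyfin (the names are just examples):

```bash
# create a network shared by the proxy and the services behind it
docker network create proxy
docker network connect proxy caddy
docker network connect proxy jellyfin

# with Jellyfin's host ports no longer published, Caddy can reach it
# by container name instead: reverse_proxy http://jellyfin:8096
```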
- Comment on How to secure Jellyfin hosted over the internet? 1 day ago:
If you’re using jellyfin as the URL, that’s an easily guessable name; if you use a random word unrelated to what’s being hosted, e.g. salmon.example.com, the chances are lower. Also, ideally your server should reply with a 200 on all subdomains so scrapers can’t tell valid from invalid ones. Also also, ideally it sends some random data in each of those responses so they don’t all look exactly the same. But that’s approaching paranoid levels of security.
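A sketch of that catch-all idea with Caddy, assuming the wildcard DNS and certificate are already handled (and assuming I remember the UUID placeholder right):

```bash
# append a wildcard site that answers 200 to any subdomain, with a
# per-request UUID as the body so responses don't look identical
cat >> /etc/caddy/Caddyfile <<'EOF'
*.example.com {
    respond "{http.request.uuid}" 200
}
EOF
```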
- Comment on How to secure Jellyfin hosted over the internet? 1 day ago:
They did that with most of my subdomains
- Comment on DockGE released 1.5.0 1 day ago:
You don’t need to; as long as your stack is all in one folder, you just point it to that folder and it will work.
- Comment on Optimal Plex Settings for Privacy-Conscious Users 4 days ago:
I recently had a weird bug with Jellyfin. Are you by chance using a domain name? Try accessing Jellyfin via its direct IP, e.g. 192.168.1.123:8096
- Comment on Optimal Plex Settings for Privacy-Conscious Users 4 days ago:
There are great apps that provide a way of organizing such libraries, which you should use to keep things organized regardless of problems with JF. They’re called Sonarr for TV shows and Radarr for movies; they provide other features too, but their media organization is great.
- Comment on Why don’t brands make simpler names? 5 days ago:
H is for High Performance, U is for Ultra-Low power usage. So if you want something for gaming choose an H; if you want hours of battery life choose a U. Pretty simple, and easy to tell at a glance whether a processor is what you’re looking for.
The 7 is not repeated on Ryzen 7 9700X, otherwise you wouldn’t have stuff like the Ryzen 5 1600X. The first 7 (or the 5 in my other example) is the segment, i.e. which market it’s aimed at: Ryzen 3 are entry-level CPUs you’d consider for your grandma, Ryzen 9 are high-power CPUs. Then the first digit of the four is the generation, the second is how it stacks up against others in its series, the third and fourth are extra differentiation if needed, and then there are some letters for feature flags. So for example the Ryzen 7 9700X is a high-end 9th-generation high-clock/performance CPU; just from that name alone I can guess that it outperforms a Ryzen 7 9500X and possibly matches a Ryzen 9 7700X. If you learn to read those, it makes it very easy to figure out whether an upgrade is worth it just from the model number.
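As a toy illustration of reading that scheme (the parsing rules here are just the ones described above, nothing official):

```bash
# decode "Ryzen <segment> <gen><tier><sku><suffix>" per the scheme above
model="Ryzen 7 9700X"   # example model string
if [[ $model =~ Ryzen\ ([0-9])\ ([0-9])([0-9])([0-9]{2})([A-Z]*) ]]; then
    echo "segment:    Ryzen ${BASH_REMATCH[1]} (3 = entry level, 9 = high end)"
    echo "generation: ${BASH_REMATCH[2]}000 series"
    echo "tier:       ${BASH_REMATCH[3]} (position within the segment)"
    echo "suffix:     ${BASH_REMATCH[5]:-none} (e.g. X = higher clocks)"
fi
```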
USB naming convention is a mess, I’m not touching that.
Also not sure about the Pro; none of my phones were ever Pro or even had a Pro version, so I can’t say.
Sony is a bit weird, but WH-1000XM5 is a Wireless Headband (WH), 1000X is the model, and M5 is the generation, so those are newer than the WH-1000XM4, and the next iteration will be called WH-1000XM6. The N is, as you guessed, noise canceling; the 1000X are top of the line so they have it too, no need to advertise it. I don’t know much about their other products, but they do seem weird.
Monitor names can be very helpful. For example, Dell uses [Series][Diagonal][Year][Ratio or Resolution][Features], so you can tell a lot from a short code. I’m not even sure this monitor exists, but a U3224QWC would be an ultrawide QHD 32-inch IPS anti-glare monitor released in 2024 with a USB-C input. That being said: www.reddit.com/r/funny/…/computer_monitors/
- Comment on Plex is locking remote streaming behind a subscription in April 1 week ago:
No need to apologize; it’s a weird choice from Plex. I would never have guessed that this is how it works if I hadn’t suffered outages myself, and I’m amazed that not many people call them out on it. It seems completely against what most self-hosting people are looking for, yet they defend Plex tooth and nail.
- Comment on Plex is locking remote streaming behind a subscription in April 1 week ago:
First of all, I agree with most of your a, b and c points. I’d just point out that while it’s true that Docker containers provide an extra level of security, they’re not as locked down as people sometimes believe. But as a general rule I agree with everything you said.
But you’re wrong about the way Plex works; this is a quote from their documentation:
> So, your Plex Media Server basically “relays” the media stream through our server so that your app can access it since the app can’t connect with your server directly.
If that’s not clear enough:
> Your security and privacy is important to us. When you have enabled secure connections on your Plex Media Server, then your streaming will continue to be secure and encrypted even when using our Relay feature. (When using secure connections, the content is encrypted end-to-end and tunneled through our Relay. The connection is not terminated on our servers and only your Plex Media Server has the certificate.)
So it’s very clear the data streams through their relay server, which goes back to my original point: I expect that to be a paid feature, since it uses bandwidth on their relay servers.
As for security, again you’re wrong: authentication happens on the Plex remote server, not on your local one, which is why you can’t use Plex without internet (part of my dislike for them). So you connect to Plex’s remote server and authenticate there, and you end up with a client that’s talking to the remote server. Even if someone were able to bypass that login, they would be inside a Plex-owned server, not yours; they would then need to exploit whatever API exists between your home server and that one to jump to your machine. So it’s an extra jump needed, again similar to having Authelia/Authentik in front of Jellyfin.
- Comment on Plex is locking remote streaming behind a subscription in April 1 week ago:
You are. Authentication is on the VPS, but you’re still relying on Jellyfin’s authentication against the internet. Correct me if I’m wrong, but this is your suggested setup: [home server] Jellyfin -> [remote server] reverse proxy -> [remote machine] users. Let’s imagine a scenario where Jellyfin has a bug where leaving the password empty logs you in (I know it’s an exaggeration, but just for the sake of argument; an SQL injection or similar attack would be more plausible, but I’m keeping things simple). In your setup, anyone can now log into your Jellyfin, and from there it’s one jump to your home server. In Plex’s solution, even if Plex authentication gets compromised, the attacker only gets access to the remote server and would need to find another vulnerability to jump to your Plex at home.
Putting something like Authelia/Authentik on a VPS in front of Jellyfin is a similar approach, but the Jellyfin client can’t handle third-party authentication AFAIK.
- Comment on Plex is locking remote streaming behind a subscription in April 1 week ago:
For remote streaming they do; here are their docs on it: …plex.tv/…/216766168-accessing-a-server-through-r…
- Comment on Plex is locking remote streaming behind a subscription in April 1 week ago:
No, the article only mentions the feature by name; the docs for the feature mention the relay: …plex.tv/…/216766168-accessing-a-server-through-r…
- Comment on Plex is locking remote streaming behind a subscription in April 1 week ago:
Using a relay server to separate the online connection from the home connection.
- Comment on Plex is locking remote streaming behind a subscription in April 1 week ago:
It’s not, not directly at least, and that’s what everyone is ignoring here. You probably understand the value of Authelia/Authentik, but you’re failing to see that the Plex relay server fills that same role here: even if someone managed to compromise the relay server, they’re still not on your home server, whereas exposing Jellyfin directly to the internet only requires one service to be compromised.
- Comment on Plex is locking remote streaming behind a subscription in April 1 week ago:
“In some way” is different from “directly”: on Plex you’re behind a relay server, so it’s akin to being behind a VPS running Authentik/Authelia in front of the service in your home. Compromising the relay server does not necessarily compromise your home server, so it’s not direct the way putting Jellyfin behind a reverse proxy would be.
- Comment on Plex is locking remote streaming behind a subscription in April 2 weeks ago:
That exposes Jellyfin to the internet
- Comment on Plex is locking remote streaming behind a subscription in April 2 weeks ago:
That exposes Jellyfin to the internet, so it’s not the same feature
- Comment on Plex is locking remote streaming behind a subscription in April 2 weeks ago:
That exposes Jellyfin to the internet, so it’s my option 1.
- Comment on Plex is locking remote streaming behind a subscription in April 2 weeks ago:
How do you do this on Jellyfin? The only ways I’m familiar with are exposing Jellyfin to the internet or accessing it through Tailscale; I’d love to hear alternatives.
- Comment on Plex is locking remote streaming behind a subscription in April 2 weeks ago:
Why would you expect this to NOT be paid? It requires them to run servers to stream the media through; I wouldn’t expect that to be a free feature.
I dislike Plex for several reasons, but asking for payment for stuff that costs them money is completely justified.
- Comment on Plex is locking remote streaming behind a subscription in April 2 weeks ago:
Kodi and Plex do different things. Both organize your media and give you a pretty interface to access it, but Kodi is a program running locally while Plex is a web service you can access remotely. Jellyfin is the open source program that does the same thing as Plex, i.e. a media server that can be accessed remotely through a web interface.
- Comment on PC gamers spend 92% of their time on older games, oh and there are apparently 908 million of us now 2 weeks ago:
I mean, Factorio’s early-access release is the midpoint between now and when God of War 2 was released: God of War 2 came out in 2007, Factorio hit early access in 2016, and it’s now 2025, so nine years each way. Meaning that when Factorio was in early access, God of War 2 was as old then as Factorio is now.
- Comment on Self-hosted SSO 2 weeks ago:
I tried Authelia but couldn’t set it up, so I’ve been using Authentik and have been quite happy. The only downside is that I had to configure it through the GUI instead of with config files, which would have been a point in Authelia’s favor had I gotten it to work properly.
- Comment on When building a home server, could a used/cheap PC do the job? 2 weeks ago:
When I started, my home server was an old laptop; eventually it became an old desktop, and now it’s server-specific hardware. My recommendation is to use whatever you have at hand unless you have specific needs. I went from laptop to desktop because I needed more disk space, and to specialized hardware for practical reasons (less space, less electricity, easily accessible hot-swappable hard drives). But for most of the stuff I run, an old laptop would still be enough; heck, a Raspberry Pi would be enough for most of it.
- Comment on What host names do you use? 3 weeks ago:
I use characters from whichever book I’m reading at the time. Examples:
- Arya: From ASOIAF, a small but powerful Ultrabook
- Cthulhu: From HP Lovecraft, a huge 17" laptop
- Horus: From the Horus Heresy books, A powerful laptop
- Binky: Death’s white horse from Discworld, a white desktop
- Peaches: A rat from Discworld’s The Amazing Maurice who always carries a book with her. My home server
- Comment on What is the minimum number of words needed to communicate 4 weeks ago:
Toki Pona doesn’t work like that; each word has multiple meanings, it’s made to be generic. For example, “tawa” means move, go, away, etc., and “mi” means me, we, us, mine, ours, etc. But “mi tawa”, which literally means “I go”, is used to mean “bye”. Or “akesi”, which means disgusting animal or lizard, and “linja”, which means long, flexible, cord, etc., so a snake is an “akesi linja”.
- Comment on why was 1995 video games console very pixel art graphics but music was high quality and images were great??, 4 weeks ago:
I’m having trouble understanding what you mean by that. Music predates computers by a long shot; you can hear Beethoven symphonies that were composed at a time when computers didn’t even exist in science fiction. Even if you’re talking about recorded music, you can find jazz records that also predate computers. So I’m not sure what exactly there is to compare here.
I guess your question is along the lines of “why doesn’t the Mario soundtrack sound like Radiohead”, which is a very valid question: we clearly had the technology to record and play that kind of music, so why not in games? The answer is simple: computers just weren’t capable of it (although in the 90s that changed, but let’s start from the beginning). The computers at the time were 8-bit, meaning any value you store must be between 0 and 255 (2^8 = 256 possible values), which leaves very little room for sound. On top of this, games needed to be extremely small for the computers of the time to run them. These severe limitations led to the aesthetic and sound of classic games; they were essentially the best a computer could produce at the time. You can find 8-bit versions of almost any song, which will give you an idea of how that music would have sounded in one of those games (although not exactly, because with the size limitations the music would have been even worse quality).
In the 90s we made the jump to 16 bits, which allowed a lot more sounds; voices sounded a bit garbled so they were rarely used, but you can find some. In this era you start to get game music that’s closer to real music; take Sonic, for example, and compare its background music with the original Mario’s.
Still in the 90s we made the jump to 32 bits, and then audio was no longer a problem. In this era you get games with video and full audio, and there are games whose soundtracks were actual albums.
- Comment on Backups: Am I doing this right? 4 weeks ago:
If all you care about is money, then it’s even less on Hetzner at 48/year. But the reason I recommended BorgBase is that it’s a bit better known and more trustworthy. $8 a year is a very small difference; sure, it will be more than that because, like you said, you won’t use the full TB on B2, but I still don’t think it’ll be that different. However, there are some advantages to using a Borg-based solution (see the sketch after this list):
- Borg can back up to multiple places, so the same setup can back up to the cloud and to a secondary disk
- Borg is an open source tool, so you can run your own Borg server, which means you can have backups sent to your desktop
- Again, because Borg is open, you can run a Raspberry Pi with a 1 TB USB disk for backups, which would be cheaper than any hosted solution
- Or you could even pair up with a friend, hosting their backups on your server while they do the same for you.
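A minimal sketch of those first three points (every path and host here is a made-up example; the remote machines just need borg installed):

```bash
# same data, several independent destinations
borg create ssh://user@backup-host/./repo::'{now}' /srv/data       # remote Borg server
borg create /mnt/backup-disk/repo::'{now}' /srv/data               # secondary disk

# a Raspberry Pi with a USB disk works the same way over SSH
borg create ssh://pi@raspberrypi.local/./backup::'{now}' /srv/data
```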
And the most important part: migrating from one to the other is simple, just a config change. So you can start with BorgBase, and in a year buy a mini computer to leave at your parents’ house and make all the needed config changes in seconds, whereas migrating away from B2 will involve a second tool. Personally I think that flexibility is worth way more than those $8/year.
Also, Borg has deduplication, versioning and encryption. I think B2 has all of that too, but I’m not entirely sure, because it’s my understanding that it duplicates the entire file when something changes, so you might end up paying a lot more for it.
As for the full system backup, I still think it’s not worth it. How do you plan on restoring it? You would probably have to boot a live USB and perform the steps there, which would involve formatting your disks properly, connecting to the remote server to get your data, then chrooting into it to install a bootloader. It just seems easier to install the OS and run a script, even if you could shave off 5 minutes the other way, assuming everything worked correctly and you were very fast.
Also, your system is constantly changing files, which means more opportunities for files to get corrupted (a similar reason why backing up a database’s folder is a worse idea than backing up a dump of it), and some files are infinite, e.g. /dev/zero or /dev/urandom, so you would need to be VERY careful about what to back up.
At the end of the day I don’t think it’s worth it. How long does it take you to install Linux on a machine? I would guess around 20 minutes, and restoring your 1 TB backup will certainly take much longer than that (probably a couple of hours), whereas with the system up you can restore the critical stuff that doesn’t require the full backup early. Another reason why Borg is a good idea: you can have a small backup of the critical stuff that restores in seconds, and another repository for the stuff that takes longer. So Immich might take a while to come back, but Authentik and Caddy can be up in seconds. Again, I’m sure B2 can also do this, but probably not as intuitively.
- Comment on Backups: Am I doing this right? 4 weeks ago:
> I figure the most bang for my buck right now is to set up off-site backups to a cloud provider.
Check out BorgBase; it’s very cheap and it’s an actual backup solution, so it offers features you won’t get from Google Drive or whatever you were considering, e.g. deduplication, recovering data from different points in time, and encrypted data so there’s no way for them to access it.
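The day-to-day Borg workflow looks something like this (the repo URL is a made-up BorgBase-style example):

```bash
REPO='ssh://abc123@abc123.repo.borgbase.com/./repo'  # hypothetical repo

borg init --encryption=repokey "$REPO"   # encrypted; the key stays with you
borg create "$REPO::{now}" /srv/data     # deduplicated snapshot
borg list "$REPO"                        # every point in time you can restore from
```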
> I first decided to do a full-system backup in the hopes I could just restore it and immediately be up and running again. I’ve seen a lot of comments saying this is the wrong approach, although I haven’t seen anyone outline exactly why.
The vast majority of your system is the same as it would be on a fresh install, so you’re wasting backup space storing data you can easily recover in other ways. You only need to store the changes you made to the system, e.g. which packages are installed (just save the list of packages, then run an install on that list; no need to back up the binaries) and which config changes you made. Plus, if you’re using Docker for services (which you really should), the services too are very easy to recover. So if you back up the compose files and config folders for those services (and obviously the data itself), you can be back up in almost no time. Also, even with a full system backup you would need to chroot into the restored system to install a bootloader, so it’s not as straightforward as you might think (unless your backup is a dd of the disk, which is a bad idea for many other reasons).
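For example, the package-list trick on an Arch-based system (swap in your distro’s package manager):

```bash
# save the list of explicitly installed packages
pacman -Qqe > pkglist.txt

# on the fresh install, reinstall everything from that list
pacman -S --needed - < pkglist.txt
```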
> I then decided I would instead cherry-pick my backup locations instead. Then I started reading about backing up databases, and it seems you can’t just back up the data directory (or file in the case of SQLite) and call it good. You need to dump them first and backup the dumps.
Yes and no. You can back up the file directly, but it’s not good practice. The reason is that if the file gets corrupted you lose all the data, whereas a dump of the database contents is much less likely to corrupt. In practice there’s no hard reason why backing up the files themselves shouldn’t work, as long as the database isn’t mid-write when you copy them (in fact, when you launch a docker container it’s always an entirely new database pointed at the same data folder); the dump is just the safer bet.
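A sketch of the dump approach (container, user and database names are placeholders):

```bash
# Postgres in Docker: dump to a plain SQL file instead of copying the data dir
docker exec my-postgres pg_dump -U myuser mydb > /backups/mydb.sql

# SQLite: .backup takes a consistent snapshot even while the app is running
sqlite3 /data/app.db ".backup '/backups/app.db'"
```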
> So, now I’m configuring a docker-db-backup container to back each one of them up, finding database containers and SQLite databases and configuring a backup job for each one. Then, I hope to drop all of those dumps into a single location and back that up to the cloud. This means that, if I need to rebuild, I’ll have to restore the containers’ volumes, restore the backups, bring up new containers, and then restore each container’s backup into the new database. It’s pretty far from my initial hope of being able to restore all the files and start using the newly restored system.
> Am I going down the wrong path here, or is this just the best way to do it?
That seems like the safest approach. If you’re concerned about it being too much work, I recommend writing a script to automate the process, or better yet an Ansible playbook.
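A sketch of what such a script might automate (every name and path is a placeholder):

```bash
#!/usr/bin/env bash
set -euo pipefail

# pull the database dumps out of the backup repo
borg extract /mnt/backup/repo::my-archive backups/

# bring the database container up, then feed the dump back in
docker compose up -d db
docker exec -i my-postgres psql -U myuser mydb < backups/mydb.sql

# finally start the rest of the stack
docker compose up -d
```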
- Comment on [deleted] 5 weeks ago:
Came here to say exactly this: SOMA is an absolutely amazing game, and it’s all about this question.