darkan15
@darkan15@lemmy.world
- Comment on This is another implementation of what's possible inside of termux for all you self hosters. 5 days ago:
The TL;DR version of sharing with no license is that, technically speaking, you are not explicitly permitting others to use your code in any way, just allowing them to look. A license is a formal way to give others permission to copy, modify, or use your code.
You don’t need an extra file for the license; you can embed it in a section at the top of your file, as you did with the description. Just add a
# License
section at the very top. If you want one of the most permissive licenses, you can just use MIT; you only need to fill in the year of publication of the code and a username or email that identifies you as the author.
- Comment on This is another implementation of what's possible inside of termux for all you self hosters. 5 days ago:
Just wondering, as this is the second post I’ve seen you do like this: why not use git and a forge (Codeberg, GitLab, GitHub) to publish these projects, with proper file separation, a nice README with descriptions and instructions, and a proper OSS license?
- Comment on Spare mini PCs? What would you do with them? 1 week ago:
You don’t need to back up all 24TB of your data; you can keep a copy of a subset of your important data on another device. If possible, the best would be a 3-2-1 approach: three copies of the data, on two different types of media, with one copy off-site.
“RAID is not a backup” is something that gets mentioned a lot, as you can still lose data on a RAID setup (an accidental deletion or corruption is mirrored across all drives).
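As a minimal sketch of that idea (tar-based; every path here is a placeholder created just for the demo — Borg, Restic, or rsync would be more robust tools for a real setup):

```shell
# Sketch: archive one important subset of data to a second location.
# These paths are demo placeholders, created here so the sketch runs anywhere.
SRC=/tmp/demo-data/important-docs
DEST=/tmp/demo-backup
mkdir -p "$SRC" "$DEST"
echo "family-photos-index" > "$SRC/index.txt"

# One dated archive per run; copying some of these off-site works toward
# a 3-2-1 setup (3 copies, 2 media types, 1 off-site).
tar -czf "$DEST/important-$(date +%F).tar.gz" -C "$(dirname "$SRC")" "$(basename "$SRC")"
ls "$DEST"
```

A cron entry pointing at a script like this is usually enough for a small "important subset" backup.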
- Comment on Spare mini PCs? What would you do with them? 1 week ago:
Secondary/Failover DNS or any other service that would be nice to have running when the main server is down for any reason.
- Comment on issues setting up nginx as an https proxy 2 weeks ago:
On your first part, clarifying your intent: I think you are overcomplicating things by expecting traffic to come to the server via domain name (passing through the proxy) from the Router A network and via `IP:Port` from the Router B network. You can access everything, from anywhere, through domains and subdomains, and avoid using numbers.
If you can’t set up a DNS directly on Router A, you can set it per device that should reach the server through Router B’s port forwarding, meaning the laptop would use itself as primary DNS with an external one as secondary, and any other device you want in that LAN would do the same. It is a bit tedious to do per device, but still possible.
Wouldn’t this link to the 192.168.0.y address of router B pass through router A, and loop back to router B, routing through the slower cable? Or is the router smart enough to realize he’s just talking to itself and just cut router A out of the traffic?
No, the request would stop at Router B and all traffic would stay on the 10.0.0.* network; it would not change subnets at all.
Remember that all my advice so far is so you don’t use any IP or port anywhere, and your experience is seamless on any device using domains and subdomains. The only place where you would need to put an IP or port is on the reverse proxy itself, to tell anything reaching it where each specific app/service is; those apps need to run on different ports, but they are reached through the reverse proxy on the defaults 80 or 443, so you don’t have to put numbers anywhere else.
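As a sketch of that last point (nginx syntax; the subdomain names, IP, and ports are placeholders, not from the thread) — the numbers live only in the proxy config, while clients use names:

```nginx
# Hypothetical nginx sketch: each subdomain maps to an app's internal port.
server {
    listen 80;
    server_name app1.home.internal;
    location / {
        proxy_pass http://10.0.0.114:8080;  # where app1 actually listens
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app2.home.internal;
    location / {
        proxy_pass http://10.0.0.114:9090;  # where app2 actually listens
        proxy_set_header Host $host;
    }
}
```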
- Comment on issues setting up nginx as an https proxy 2 weeks ago:
If you decide to run the secondary DNS on the server on the Router B network, there is no need to loop back, as the secondary DNS will keep domain lookups and requests on 10.0.0.x, all internal to the Router B network.
You can still add rules on the reverse proxy for origin IPs from 192.168.0.x if you see the need to differentiate traffic.
- Comment on issues setting up nginx as an https proxy 2 weeks ago:
Do yourself a favor and use the default ports for HTTP (80), HTTPS (443), and DNS (53).
That way, you can do URLs like app1.home.internal and app2.home.internal without having to add ports on anything outside the reverse proxy.
You could run only one DNS on the laptop connected to Router A (external, connected to the internet), and point the domain to Router B (internal, connected to Router A, with a WAN IP of 192.168.0.y and an internal IP of 10.0.0.1). Redirect, for example, the domain home.internal or home.lan (home.internal is recommended, as it is the one intended for this use by convention) to the 192.168.0.y IP, and it will redirect all devices to the server via port forwarding.
If Router B port-forwards ports 80 and 443 to the server at 10.0.0.114, all the requests are going to reach it, no matter which LAN they come from. The devices connected to Router A will reach the server thanks to port forwarding, and the devices on Router B can reach anything connected to the Router A network 192.168.0.*; they will make an extra hop but still get there.
Both routers would have to point their primary DNS to the laptop IP 192.168.0.x (which should be a static IP), and the secondary to either Cloudflare 1.1.1.1 or Google 8.8.8.8.
That setup depends on having the laptop (or another device) always on and connected to the Router A network for DNS to work. You could run a second DNS on the server for the 10.0.0.* LAN only, but that would not be reachable from Router A, the laptop, or any other device on that outer LAN, only from devices directly connected to Router B; the only change needed would be pointing the primary DNS on Router B to the server IP 10.0.0.114 to use that secondary DNS.
Lots of information; be sure to read slowly and separate the steps to handle them one by one, but this should be the final setup, considering the information you have given.
You should be able to set up the certificates and the reverse proxy using subdomains without much trouble, only using IP:PORT on the reverse proxy itself.
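The DNS piece of this, if the laptop ran dnsmasq for example, could be as small as (placeholder domain and IP, matching the examples above):

```
# /etc/dnsmasq.conf (hypothetical): answer home.internal and all of its
# subdomains with Router B's WAN-side address, forward everything else.
address=/home.internal/192.168.0.y
server=1.1.1.1
```

Any of the local resolvers mentioned in this thread (Pi-hole, AdGuard Home, Technitium) can express the same "local domain → one IP" rule in their own UI.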
- Comment on issues setting up nginx as an https proxy 2 weeks ago:
Most routers, or devices, let you set at least a primary and a secondary DNS resolver (some let you add more), so you could have your local one as primary and an external one like Google or Cloudflare as secondary. That way, if your local DNS resolver is down, queries will go directly to the external one.
Still. Thanks for the tips. I’ll update the post with the solution once I figure it out.
You are welcome.
- Comment on issues setting up nginx as an https proxy 2 weeks ago:
It should not be an issue to have everything internal; you can set up a local DNS resolver and configure the device that handles your DHCP (router or other) to hand that out as the default DNS for all devices on your network.
To give you some options if you want to investigate, there are: dnsmasq, Technitium, Pi-hole, AdGuard Home. They can resolve external DNS queries and also do domain redirection, to handle your internal-only domain and redirect to the device with your reverse proxy.
That way, you can have a local domain like `domain.lan` or `domain.internal` that only works and is managed on your internal network, and you can use subdomains as well.
- Comment on issues setting up nginx as an https proxy 2 weeks ago:
Not all services/apps work well with subdirectories through a reverse proxy.
Some services/apps have a config option to add a prefix to all paths on their side to help with it.
But if you need to do some kind of path rewrite on the reverse proxy side only, to add or change a segment of the path, there can be issues if not all path changes go through the proxy. An example of this is a PWA: when you click a link that should change the path, it doesn’t reload the page (the action that would force a load through the reverse proxy and thereby trigger the rewrite), but instead uses JavaScript to rewrite the path locally and do DOM manipulation without triggering a page load.
To be honest, the best way out of this headache is to use subdomains instead of subdirectories; it is the standard these days, precisely to avoid path-rewrite magic that doesn’t work in a bunch of situations.
Yes, it can be annoying to handle SSL certificates if you can’t or don’t want to issue wildcard certificates, but if you can get a cert with both maindomain.tld and *.maindomain.tld, then you don’t need to touch it anymore and can use the same certificate for any service/app you want to host behind the reverse proxy.
- Comment on If I use Caddy for reverse-proxying into another local machine... is my local connection not HTTPS? 3 weeks ago:
If your concern is IoT devices, TVs, and the like sniffing on your local traffic, there are alternatives, and some of them are:
- https from reverse proxy to service.
- VLANs or Different LANs for IoT and your trusted devices (I do this one).
- Internal VPN connection between devices (like WireGuard), so the communication between selected devices is encrypted.
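For the WireGuard option, a minimal sketch of one device’s config (all keys, IPs, and the endpoint are placeholders): limiting AllowedIPs to the VPN subnet means only traffic between the selected devices is encrypted, while everything else flows normally.

```ini
# Hypothetical WireGuard peer config — keys and addresses are placeholders.
[Interface]
PrivateKey = <this-device-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <other-device-public-key>
Endpoint = 192.168.1.10:51820      # hypothetical LAN address of the peer
AllowedIPs = 10.8.0.0/24           # only tunnel traffic for the VPN subnet
```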
- Comment on What is the easiest way to have a self hosted git server? 3 weeks ago:
The simplest (really the simplest) would be to do a `git init --bare` in a directory on one machine. That way you can push to or pull from it, using the directory path as the URL from the same machine and SSH from another. (You could put this bare repo inside a container, but that would really be complicating it.) You would have to init a new bare repo per project, each in its own directory.
If by self-hosted server you mean something with a web UI to handle multiple repositories, with pull requests, issues, etc., like your own local GitHub/GitLab, the answer is Forgejo (this link has the instructions to deploy with Docker). If you want to see what that looks like, there is an online public instance called Codeberg, where the Forgejo code is hosted alongside other projects.
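The bare-repo workflow sketched out (all paths under /tmp are demo placeholders; on a real setup the bare repo lives on the server and the remote URL from another machine is an ssh:// path):

```shell
# Create the "server side": a bare repo (no working tree, just a push target)
mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init --bare myproject.git

# On the same machine, the directory path works as the remote URL
git clone /tmp/git-demo/myproject.git work
cd work
echo "hello" > README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -m "first commit"
git push origin HEAD

# From another machine it would be something like:
#   git clone ssh://user@server/tmp/git-demo/myproject.git
```

One bare repo per project; the `.git` suffix on the directory name is just convention.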
- Comment on If you miss old network multiplayer games, or would like to try them with your friends for the first time, may I suggest setting them up via SoftEtherVPN? 4 weeks ago:
I don’t know if SoftEther has an option so you don’t tunnel everything, and just use the virtual LAN IPs for games, file transfers, etc.
And I don’t know your actual technical level or that of the people you play with, but for people who can go as far as opening ports, installing a server, and getting others to connect to it, I would suggest Headscale (the free self-hosted version of the Tailscale control server) as a next step, or, if you are inclined to learn something a bit more hands-on, WireGuard.
With those, you can configure it so only the desired traffic goes through the tunnel (like games or file sharing using the virtual LAN IPs) and the rest goes out normally, or configure exit nodes so that, if and when desired, all traffic is tunneled like what you have now.
- Comment on Docker dashboards: choice overload 4 weeks ago:
This would be my choice as well; I went with Dockge exactly because it works with your existing docker-compose files, and there are no issues whether you manage them with Dockge or from the terminal.
- Comment on First Time Self Hoster- Need help with Radicale 4 weeks ago:
But I think I’m understanding a bit! I need to literally create a file named “/etc/radicale/config”.
Yes, you will need to create that `config` file in one of those paths, so you can then continue with any of the configuration steps in the documentation; you can do that `Addresses` step first.
A second file for the users is needed as well; I would guess the best location would be `/etc/radicale/users`.
For the authentication part, you will need to install the `apache2-utils` package with `sudo apt-get install apache2-utils`, to use the `htpasswd` command to add users.
So the command to add a user would be `htpasswd -5 -c /etc/radicale/users user1`, with your username instead of user1.
And what you need to add to the config file for it to read your users file would be:
[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = autodetect
Replacing the path with the one where you created your users file.
- Comment on First Time Self Hoster- Need help with Radicale 4 weeks ago:
I’m trying to follow the tutorial on the radicale website but am getting stuck in the “addresses” part.
From reading the link you provided, you have to create a config file in one of two locations if it doesn’t exist:
“Radicale tries to load configuration files from `/etc/radicale/config` and `~/.config/radicale/config`”
After that, add what the `Addresses` section says to the file:
[server]
hosts = 0.0.0.0:5232, [::]:5232
And then start/restart Radicale.
After that, you should be able to access it from another device using the Pi’s IP and that port.
- Comment on how to start with self-hosting? 5 weeks ago:
Yeah, I started the same, hosting LAN parties with Minecraft and Counter Strike 1.6 servers on my own Windows machine.
But what happens when you want to install some app/service that doesn’t have a native binary installer for your OS? You will not only have to learn how to configure/manage said app/service, you will also need to learn one or multiple additional layers.
I could have said “simple bare metal OS and a binary installer” and for some people it would sound alien, while others would be as nitpicky about it as they are with me saying Docker (not seeing that the terminology I used was not for a newbie but for them). If the apps you want to self-host are offered with things like YunoHost or CasaOS, that’s great, and there are apps/services that can be installed directly on your OS without much trouble, which is also great. But there are cases where you will need to learn something extra.
- Comment on how to start with self-hosting? 5 weeks ago:
XKCD 2501 applies in this thread.
I agree; there are so many layers of complexity in self-hosting that most of us tend to forget, when the most basic thing would be a simple bare metal OS and Docker.
you’ll probably want to upgrade the ram soon
His hardware has a max RAM limit of 4GB, so the only probable upgrade he could do is a SATA SSD. Even so, I’m running around 15 Docker containers on similar specs, so as a starting point it is totally fine.
- Comment on how to start with self-hosting? 5 weeks ago:
I get your point and know it has its merits. I would actually recommend Proxmox for a later stage, when you are familiar with handling the basics of a server and have hardware that can properly handle virtualization. For OP, who has a machine that is fairly old with low specs and is also a newbie, I think fewer layers of complexity make a better starting point; they can build on top of that in the future.
- Comment on how to start with self-hosting? 5 weeks ago:
I have a Dell Inspiron 1545, which has similar specs to yours, running Debian with Docker and around 15 services in containers, so my recommendation would be to run Debian server (with no DE), install Docker, and start from there.
I would not recommend proxmox or virtual machines to a newbie, and would instead recommend running stuff on a bare metal installation of Debian.
There are a bunch of alternatives to ease the management of apps you could choose from, like YunoHost, CasaOS, Yacht, Cosmos Cloud, Cockpit, etc., that you can check out and use on top of Debian if you prefer. But I would still recommend spending time learning how to do stuff yourself directly with Docker (using docker compose files), and you can use something like Portainer or Dockge to help you manage your containers.
My last recommendation: while you are testing and trying stuff, don’t put the only copy of important data on the server; if something breaks, you will lose it. Invest time in learning how to properly backup/sync/restore your data, so you have a safety net and, in case something happens, a way to recover.
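If it helps to see the shape of it, a docker compose file for a first container can be as small as this (a sketch; nginx is chosen arbitrarily, and the image, port, and bind-mount path are placeholders):

```yaml
# compose.yaml — a hypothetical first stack
services:
  web:
    image: nginx:stable
    ports:
      - "8080:80"                         # host:container
    volumes:
      - ./site:/usr/share/nginx/html:ro   # bind mount: easy to back up
    restart: unless-stopped
```

From the same directory, `docker compose up -d` starts it and `docker compose down` stops it.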
- Comment on Setting up 2FAuth; Can't Register 1 month ago:
I have no experience with this app in particular, but most of the time when there is an issue like this, where you can’t reach an app on any path besides the index, it is because the app itself doesn’t work well with subfolder path redirection, meaning the app expects paths to be something like `domain.tld/index.html` instead of `domain.tld/subfolder/index.html` for all its routes.
Some apps let you add a prefix to all their routes so they can work; then you not only have to configure nginx but also the app itself to use the same subfolder. Some other apps will work with the right configuration in nginx if they do a full page load every time the path/route changes. But apps like PWAs, which don’t do a page load every time the path changes, are not going to work with subfolders, as they don’t do any page refresh that goes through nginx and just rewrite the visible URL in the browser.
I don’t have the knowledge to help you troubleshoot this specific app, but what I can recommend is to switch to a subdomain like `2fa.domain.tld` instead of a subfolder and test if it works; subdomains are the modern standard for this kind of thing these days, precisely to avoid this type of issue.
- Comment on selfh.st: improper etiquette by 2010 standards? (trackers, no RSS) Thoughts? 1 month ago:
There is an update on the RSS situation of selfh.st. TL;DR: it seems to be related to monetization, so the feed is now available with a paid subscription, while free readers have to visit the site to read.
- Comment on [Help request] How do I go about debugging my router? 2 months ago:
Traceroute can be a good hint. Another way to confirm is in your router’s config interface: there should be an IP address, subnet, and gateway it connects to, and with those values you could also verify it, depending on what IP ranges it shows.
- Comment on [Help request] How do I go about debugging my router? 2 months ago:
Well, if you are forwarding the ports from your home router and still can’t reach it, the most probable cause is that there is no public IP reaching your home router.
You could contact your ISP and confirm whether this is the case; they may offer to assign a public IP for an extra fee. Your only other option is to rent a cheap VPS and tunnel traffic between it and your home, but at that point you could also decide to host your stuff on the VPS.
- Comment on [Help request] How do I go about debugging my router? 2 months ago:
Even if your ISP (Internet Service Provider) doesn’t have you behind CGNAT or double NAT (meaning that multiple homes share the same public IP), some ISPs block the first 1024 ports, so any port below that number is unreachable.
If the problem is that ports below 1024 are blocked, but you do have a public IP reaching your home router, you could contact your ISP so they unblock those ports for you (I had to do that once, and at least with my ISP it was as simple as asking).
The way to test whether your public IP reaches your home router is to expose something on a port higher than 1024, let’s say 8080. If you can reach a simple web server, Caddy, or any other service on 8080, you can at least confirm that blocked low ports are the issue.
Be aware that even when an ISP assigns a single IP per house, this IP can be dynamic and rotate on a regular basis, like daily or weekly.
- Comment on Self-hosted blog - do I need a static IP address? 2 months ago:
As others have already commented, what you need is a Dynamic DNS (DDNS) service: you register a subdomain and set up a small program or script on your computer that pings the DDNS server every few minutes. You leave that running in the background, and if the program detects that the IP making the request has changed, it updates the subdomain to point to it automatically.
If you want a recommendation, I have been using DuckDNS for years, and it has been pretty reliable.
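With DuckDNS, that “small program or script” can be as little as one cron entry (the subdomain and token below are placeholders; leaving ip= empty lets DuckDNS detect the caller’s public IP):

```
# crontab entry (add via `crontab -e`): update the record every 5 minutes
*/5 * * * * curl -fsS "https://www.duckdns.org/update?domains=YOURSUBDOMAIN&token=YOURTOKEN&ip=" >/dev/null 2>&1
```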
- Comment on What's up, selfhosters? It's self hosting Sunday! 2 months ago:
what is a good solution to keep a music folder backed up
syncthing (file sync, update: removed this, not needed, actually need a backup solution)
For a backup solution, you could use Borg or Restic; they are CLI tools, but there are also GUIs for them.
how can I back up my Docker setup in case I screw it up and need to set it all up again?
learn to use Dockage to replace Portainer (done, happy with this)
If you did the switch to Dockge, it might be because you prefer having your docker compose files easily accessible on the filesystem. The question is whether you also have the persistent data of your containers in bind mounts, so they are easy to back up.
I have a git repo of my stacks folder with all my docker compose files (secrets in env files that are ignored), so that I can track all changes made to them.
Also, I have a script that stops every container while I’m sleeping and triggers backups of the stacks folder and all my bind mount folders. That way I have daily/weekly backups of all my stuff, and in case something breaks, I can roll back to any of these backups, just `docker compose up`, and I’m back on track.
An important step is to frequently check that the backups are good; I do this by stopping my main service and running a secondary instance from a different folder with the backed-up compose file and bind mounts.
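A rough sketch of that nightly routine (the folder layout is hypothetical and created here just for the demo; the docker compose calls are left commented out so the sketch runs anywhere):

```shell
# Sketch: archive each stack folder (compose file + bind mounts).
# In the real script, each stack is stopped before copying its data.
STACKS=/tmp/demo-stacks
BACKUPS=/tmp/demo-backups
mkdir -p "$STACKS/app1" "$BACKUPS"
echo "services: {}" > "$STACKS/app1/compose.yaml"   # demo placeholder

for stack in "$STACKS"/*/; do
  name=$(basename "$stack")
  # docker compose -f "$stack/compose.yaml" down    # stop before copying data
  tar -czf "$BACKUPS/$name-$(date +%F).tar.gz" -C "$STACKS" "$name"
  # docker compose -f "$stack/compose.yaml" up -d   # bring it back up
done
ls "$BACKUPS"
```

Restoring is then just extracting one archive elsewhere and running `docker compose up` from it.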
- Comment on Recommendations for a version control system 2 months ago:
I used Gitea for a while and decided to switch to Forgejo before the hard fork split (no more code from Gitea); I’ve been using it since. In my opinion both work well, but I prefer Forgejo.
- Comment on Upgrading Paperless-ngx several revisions behind 2 months ago:
Having the ability to shut down the main instance of an app and run a secondary instance from backups without much hassle is the best feeling ever. I recently updated from Nextcloud v26 to v31, and being able to just go back to a working version if anything went wrong saved me from so much stress.
- Comment on Upgrading Paperless-ngx several revisions behind 2 months ago:
Yeah, this is pretty solid advice. I would say you should be safe with patch version updates, like from 1.17.1 to 1.17.4.
You should be able to jump from 1.17.4 to 2.0.1, from 2.0.1 to 2.1.3, etc., going straight to the last patch of each next version, but you should go one minor version at a time, paying close attention to versions that have breaking changes in the release notes. And always back up and test before each version jump.
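The stepping plan can be sketched like this (the compose file, image name, and tags are placeholders created just for the demo):

```shell
# Sketch: walk the image tag through each minor version in order
# instead of jumping straight to the latest release.
mkdir -p /tmp/upgrade-demo && cd /tmp/upgrade-demo
printf 'services:\n  app:\n    image: example/app:1.17.1\n' > compose.yaml

for tag in 1.17.4 2.0.1 2.1.3; do
  sed -i "s|image: example/app:.*|image: example/app:$tag|" compose.yaml
  # docker compose up -d   # start, verify, and back up before moving on
  echo "now at $tag"
done
```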