7Sea_Sailor
@7Sea_Sailor@lemmy.dbzer0.com
- Comment on Help a noob find what I'm looking for please. I have a bunch of IP addresses and I wanna give em names. 8 months ago:
Caddy and Authentik play very nicely together thanks to Caddy's `forward_auth` directive. Regarding ACLs, you'll have to read some documentation, but it shouldn't be difficult to figure out. The documentation and forum are great sources of info.
- Comment on Help a noob find what I'm looking for please. I have a bunch of IP addresses and I wanna give em names. 8 months ago:
AdGuard Home supports static clients. But unless the instance is only used over plain TCP (port 53, unencrypted), the far better way is to put client names into the DNS server address and unblock clients based on that name.
For DoT: `clientname.dns.yourdomain.com`
For DoH: `https://dns.yourdomain.com/dns-query/clientname`
A client, especially a mobile one, can simply not guarantee always having the same IP address.
- Comment on Help a noob find what I'm looking for please. I have a bunch of IP addresses and I wanna give em names. 8 months ago:
If you don't fear using a little bit of terminal, Caddy imo is the better choice. It makes SSL even more brainless (since it's 100% automatic), is very easy to configure yet very powerful if you need it, doesn't require a 200 MB MySQL database, and doesn't have issues with path filtering due to UI abstractions. There are many more advantages to Caddy over NPM. I haven't looked back since I switched.
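To illustrate the simplicity point (domain and port here are made up), a complete Caddyfile that reverse-proxies one app with automatic HTTPS is just:

```
myapp.example.com {
	reverse_proxy localhost:8080
}
```

Caddy obtains and renews the TLS certificate for that hostname on its own, no further config needed.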
- Comment on My take on selfhosted photo management 8 months ago:
The demo instance would be their commercial service, I suppose: ente.io. Since, in their own words, the GitHub code represents 1:1 the code running on their own servers, the result when selfhosting should be identical.
- Comment on My take on selfhosted photo management 8 months ago:
There's a Dockerfile that you can use for building. It barely changes the flow of how you set up the container. The bigger issue imo is that it literally is the code they use for their premium service, meaning that all the payment stuff is in there. And I don't know if the apps even have support for connecting to a custom instance.
- Comment on Get notified on Mastodon for new Github releases 9 months ago:
You can run `docker compose up -d <service>` to (re)create only one service from your compose file.
- Comment on Looking for a music solution 9 months ago:
I’ll plug another subsonic compatible server here: gonic. It does not have a web player ui, which saves on RAM. And it is really fast too.
- Comment on Self hosted Wetransfer? 9 months ago:
It supports sharing via public link. But I don’t think it has sharing with registered users via username.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Hm, I have yet to mess around with Matrix. As with anything fediverse, the increased complexity is a little overwhelming for me, and since I'm not pulled to Matrix by any communities I'm a part of, I haven't yet been forced to make any decisions.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Are you talking about the Tailscale app or the ZeroTier app? Because the TS Android app is the one thing I'm somewhat unhappy about, since it does not play nice with the private DNS setting.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
I heard about tailscale first, and haven’t yet had enough trouble to attempt a switch.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
I use Hetzner, mainly because of their good uptime, dependable service, and geographic proximity to me. It's a “safe bet”, if you will. Monthly cost, if we're not counting power usage by the homelab, is about 15 bucks for all three servers.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
That’s a tough one. I’ve pieced this all together from countless guides for each app itself, combined with tons of reddit reading.
There are some sources that I can list though:
- awesome-selfhosted.net is great to find apps you might want to host
- docs.ibracorp.io mainly aims at Unraid hosting, but the information can oftentimes be transferred
- how2host.it has some start-to-finish guides that explain every setup step
- github.com/mikeroyal/Self-Hosting-Guide is an incredibly long list of apps and resources you can use as a launchpad. Note the “Tutorials & Resources” section for further links
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
I'd love to have everything centralized at home, but my net connection tends to fail a lot and I don't want critical services (AdGuard, Vaultwarden, and a bunch of others that aren't listed) running off of flaky internet, so those will remain in a datacenter. Other stuff might move around, or maybe not. Only time will tell; I'm still at the beginning of my journey, after all!
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Pretty sure ruTorrent is a typical download client. The real reason is that it came preinstalled and I never had a reason to change it ¯\_(ツ)_/¯
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Glad to have gotten you back into the grind!
My homelab runs on an N100 board I ordered on Aliexpress for ~150€, plus some 16GB Corsair DDR5 SODIMM RAM. The Main VPS is a 2 vCPU 4GB RAM machine, and the LabProxy is a 4 vCPU 4GB RAM ARM machine.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
The rclone mount works via SSH credentials. Torrent files and tracker searches run over simple HTTPS, since both my torrent client and jackett expose public APIs for these purposes, so I can just enter the web address of these endpoints into the apps running on my homelab.
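A sketch of what such an SSH-based remote can look like in rclone.conf (remote name, host, user, and paths are made up for illustration):

```
[seedbox]
type = sftp
host = seedbox.example.com
user = myuser
key_file = ~/.ssh/id_ed25519
```

The mount on the homelab side is then something like `rclone mount seedbox:downloads /mnt/seedbox --daemon`.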
- Comment on Setting Up a Secure Tunnel Between Two Machines 9 months ago:
Allow me to cross-post my recent post about my own infrastructure, which has pretty much exactly this established: lemmy.dbzer0.com/post/13552101.
At the homelab (`A` in your case), I have Tailscale running on the host and Caddy in Docker exposing port 8443 (though the exact port doesn't matter). The external VPS (`B` in your case) runs docker-less Caddy and Tailscale (it probably also works with Caddy in Docker if you run it in `network: host` mode). Caddy takes in all web requests to my domain and reverse-proxies them to the Tailscale hostname of my homelab on :8443. It does so with a wildcard entry (`*.mydomain.com`), and it forwards everything. That way it also handles the wildcard TLS certificate for the domain. The Caddy instance on the homelab then checks for specific subdomains or paths, and reverse-proxies the requests again to the targeted Docker container.
The original source IP is available to your local Docker containers via the `X-Forwarded-For` header, which Caddy handles beautifully. Simply add this block at the top of your Caddyfile on server A:

```
{
	servers {
		trusted_proxies static 192.168.144.1/24 100.111.166.92
	}
}
```
replacing the first IP with the gateway in the docker network, and the second IP with the “virtual” IP of server A inside the tailnet. Your containers, if they’re written properly, should automatically read this value and display the real source IP in their logs.
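Putting the description together, a rough sketch of the two Caddyfiles (domain, hostnames, and ports are assumptions based on this thread, not verbatim config):

```
# On the external VPS (B): wildcard cert, forward everything over the tailnet
*.mydomain.com {
	reverse_proxy homelab:8443
}

# On the homelab (A): TLS was already terminated upstream, so listen for plain HTTP
http://app.mydomain.com:8443 {
	reverse_proxy app-container:80
}
```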
Let me know if you have any further questions.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Maybe. But I've read some crazy stories on the web. Some nutcases go very far to ruin an online stranger's day. I want to be able to share links to my infrastructure (think photos or download links) without having to worry that the underlying IP will be abused by someone who doesn't like me for whatever reason. Maybe that's just me, but it makes me sleep more soundly at night.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
May I present to you: Caddy but for docker and with labels so kind of like traefik but the labels are shorter 👏 github.com/lucaslorentz/caddy-docker-proxy
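For illustration, a compose service wired up this way could look like the following (domain is made up; label syntax as per the caddy-docker-proxy README):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"
```

The plugin watches Docker for these labels and generates the corresponding Caddyfile entries on the fly.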
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
You make a good point. But I still find that directly exposing a port on my home network feels more dangerous than doing so on a remote server. I want to prevent attackers from sidestepping the proxy and directly accessing the server itself, which feels more likely to allow circumventing the isolation provided by Docker in case of a breach.
Judging from a couple of articles I read online, if I wanted to publicly expose a port on my home network, I should also isolate the public server from the rest of the local LAN with a VLAN. For that, I'd need to first replace my router and learn a whole lot more about networking. Doing it this way, which is basically a homemade Cloudflare Tunnel, lets me rest easier at night.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
It's basically a VPS that comes with torrenting software preinstalled. Depending on the hoster and package, you'll be able to install all kinds of webapps on the server. Some even enable Plex/Jellyfin on the more expensive plans.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Nope, don’t have that yet. But since all my compose and config files are neatly organized on the file system, by domain and then by service, I tar up that entire docker dir once a week and pull it to the homelab, just in case.
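The weekly tar-up can be sketched roughly like this (all paths and the directory layout are made up for illustration):

```shell
#!/bin/sh
set -e
# Example layout: compose files organized by domain, then by service.
SRC=$(mktemp -d)/docker
mkdir -p "$SRC/example.com/caddy"
echo "services: {}" > "$SRC/example.com/caddy/compose.yaml"

# Weekly backup: archive the entire docker dir, stamped with the date.
DEST=$(mktemp -d)
tar -czf "$DEST/docker-$(date +%F).tar.gz" -C "$(dirname "$SRC")" docker

# List the archive contents to confirm the compose file made it in.
tar -tzf "$DEST"/docker-*.tar.gz
```

Pulling the resulting tarball to the homelab is then just one scp/rclone call away.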
How have you set up your provisioning script? Any special services, or just some clever batch scripting?
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Absolutely! To be honest, I don't even want to have countless machines under my umbrella, and I constantly have consolidation in mind - but right now, each machine fulfills a separate purpose and feels justified in itself (homelab for large data, main VPS for anything that's operation-critical and can't afford power/network outages, and so on). So unless I find another purpose that none of the current machines can serve, I'll probably scale vertically instead of horizontally (is that even how you use that expression?)
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
The crowdsec agent running on my homelab (8 Cores, 16GB RAM) is currently sitting idle at 96.86MiB RAM and between 0.4 and 1.5% CPU usage. I have a separate crowdsec agent running on the Main VPS, which is a 2 vCPU 4GB RAM machine. There, it’s using 1.3% CPU and around 2.5% RAM. All in all, very manageable.
There is definitely a learning curve to it. When I first dove into the docs, I was overwhelmed by all the new terminology, and wrapping my head around it was not super straightforward. Now that I’ve had some time with it though, it’s become more and more clear. I’ve even written my own simple parsers for apps that aren’t on the hub!
What I find especially helpful are features like `explain`, which lets me pass in logs and simulate which step of the pipeline picks them up and how they are processed. That's great when trying to diagnose why something is or isn't happening.
The crowdsec agent on my homelab runs from the Docker container and uses pretty much exactly the stock configuration. This is how the container is launched:

```yaml
crowdsec:
  image: crowdsecurity/crowdsec
  container_name: crowdsec
  restart: always
  networks:
    socket-proxy:
  ports:
    - "8080:8080"
  environment:
    DOCKER_HOST: tcp://socketproxy:2375
    COLLECTIONS: "schiz0phr3ne/radarr schiz0phr3ne/sonarr"
    BOUNCER_KEY_caddy: as8d0h109das9d0
    USE_WAL: true
  volumes:
    - /mnt/user/appdata/crowdsec/db:/var/lib/crowdsec/data
    - /mnt/user/appdata/crowdsec/acquis:/etc/crowdsec/acquis.d
    - /mnt/user/appdata/crowdsec/config:/etc/crowdsec
```
Then there’s the Caddyfile on the LabProxy, which is where I handle banned IPs so that their traffic doesn’t even hit my homelab. This is the file:
```
{
	crowdsec {
		api_url http://homelab:8080
		api_key as8d0h109das9d0
		ticker_interval 10s
	}
}

*.mydomain.com {
	tls {
		dns cloudflare skPTIe-qA_9H2_QnpFYaashud0as8d012qdißRwCq
	}
	encode gzip
	route {
		crowdsec
		reverse_proxy homelab:8443
	}
}
```
Keep in mind that the two machines are connected via tailscale, which is why I can pass in the crowdsec agent with its local hostname. If the two machines were physically separated, you’d need to expose the REST API of the agent over the web.
I hope this helps clear up some of your confusion! Let me know if you need any further help with understanding it. It only gets easier the more you interact with it!
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Of course! Here you go: files.catbox.moe/hy713z.png. The image has the raw Excalidraw data embedded, so you can import it into the website like a save file and play around with the sorting if need be.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Oh, that! That app proxies docker socket connections over a TCP channel, which provides more granular control over which app gets access to which functionality of the docker socket. Directly mounting the socket into an app technically grants full root access to the host system in case of a breach, so this is the advised way to do it.
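One widely used implementation of such a socket proxy is tecnativa/docker-socket-proxy (an assumption here, since the exact image isn't named). A sketch of the idea:

```yaml
services:
  socketproxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1  # allow read-only container endpoints
      POST: 0        # deny anything that changes state
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - socket-proxy

networks:
  socket-proxy:
    internal: true  # keep the proxied socket off any external network
```

Consumers like crowdsec then point at it via `DOCKER_HOST: tcp://socketproxy:2375` instead of mounting the socket directly.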
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
You’re right, that’s one of the remaining pain points of the setup. The rclone connections are all established from the homelab, so potential attackers wouldn’t have any traces of the other servers. But I’m not 100% sure if I’ve protected the local backup copy from a full deletion.
The homelab is currently using Kopia to push some of the most important data to OneDrive. From what I've read, it works very similarly to Borg (deduplicated, chunk-based, compressed, and encrypted), so it would probably also be able to do this task? Or maybe I'll just move all backups to Borg.
Do you happen to have a helpful opinion on Kopia vs Borg?
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
Very true! For me, that specific server was a chance to try out ARM-based servers. Also, I initially wanted to spin up something billed by the hour for testing, and then it worked so quickly that I just left it running.
But I’ll keep my eye out for some low spec yearly billed servers, and move sooner or later.
- Comment on After 1.5 years of learning selfhosting, this is where I'm at 9 months ago:
In addition to the other commenter and their great points, here’s some more things I like:
- resource efficient: I'm running all my stuff on low-end servers and can't afford my reverse proxy wasting gigabytes of RAM (looking at you, NPM)
- very easy syntax: the Caddyfile uses a very simple, easy to remember syntax. And the documentation is very precise and quickly tells me what to do to achieve something. I tried traefik and couldn’t handle the long, complicated tag names required to set anything up.
- plugin ecosystem: caddy is written in Go and very easy to extend. There are tons of plugins for different functionalities, which are (mostly) well documented and easy to use. Building a custom caddy executable takes one command.
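That one command is `xcaddy build`; in Docker, the official builder image wraps it. A sketch (the Cloudflare DNS plugin here is just an illustration):

```dockerfile
FROM caddy:builder AS builder
RUN xcaddy build --with github.com/caddy-dns/cloudflare

FROM caddy:latest
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```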