citizen
@citizen@sh.itjust.works
- Comment on Longhorn overkill for RAID ? 11 months ago:
Sounds like NFS is the way to go in your case. Longhorn would add unnecessary resource overhead and complexity for your setup.
- Comment on Do you run a private CA? Could you tell me about your certificate setup if you do? 11 months ago:
I'm using step-ca. It's running on a dedicated SBC. ACME certs are created for each service and renew automatically daily. Honestly this setup wouldn't be worth it if it weren't for the daily cert rotation. I'm not using wildcard certs with my own CA, as that's bad practice and defeats the purpose. I'm validating certs with DNS using TSIG. Step-ca has several integrations with different DNS services; I chose TSIG because it's universal. There is a Pi-hole integration if you use that. Buying a valid domain is not needed as long as you have internal DNS. You need to install the root CA on every machine that will be connecting to the services. If you have many VMs, configuration management is the way to go.
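For illustration only (the hostnames, credential path, and ACME provisioner name below are made up, and the client machine must already trust the step-ca root), an ACME client like certbot with the RFC2136/TSIG DNS plugin pointed at the internal CA looks roughly like this:
# tsig.ini holds the DNS server, TSIG key name, secret, and algorithm for the rfc2136 plugin
certbot certonly --dns-rfc2136 --dns-rfc2136-credentials /etc/letsencrypt/tsig.ini \
  --server https://ca.home.lan/acme/acme/directory -d service.home.lan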
- Comment on Need help setting up local SSL certificates? 11 months ago:
Yeah, I looked at the tutorial. Port 81 is only for management (the NPM admin GUI). Then you have your traffic ports for proxied services; those would be 80 and 443 normally. You would need to expose those ports to the Internet if you want to reach NPM-proxied services from outside. Port 81 shouldn't be exposed on your public interface. Make sure it isn't, or at least have a firewall rule that allows only the local network (ideally a management network/VLAN).
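For example, if NPM runs in Docker (the IP below is a placeholder for a management-network address), you can bind the admin port to that address only while 80/443 stay public:
docker run -d --name npm -p 80:80 -p 443:443 -p 192.168.10.2:81:81 jc21/nginx-proxy-manager:latest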
- Comment on Need help setting up local SSL certificates? 11 months ago:
It's not clear what the purpose of NPM is in your case. Do you want to serve your internal network or expose services to the Internet? If it's the latter, you need to check which interface you exposed the NPM port on (it has to be your public network - the VPS IP), and your firewall needs to allow incoming connections on that port. Most likely you will be using port 443, and maybe 80 for the redirect (the checkbox in NPM to always use TLS).
The openssl command needs to be executed from the VPS to eliminate network issues and just validate the certificate setup. The IP and port depend on which port you exposed; 127.0.0.1 should work from that context. Once you see the certificate, you can run the openssl command from your local machine and use the WireGuard tunnel IP to connect to the service. This covers the internal network case.
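From your local machine it would look something like this, assuming 10.8.0.1 is the VPS's WireGuard tunnel IP and the hostname is whatever you configured as the proxy host in NPM (both are placeholders):
openssl s_client -connect 10.8.0.1:443 -servername service.example.com -showcerts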
- Comment on Need help setting up local SSL certificates? 11 months ago:
Can you elaborate more on what is not working? What are you testing to conclude it's not working? From my understanding you're running a VPS. You have a tunnel set up to connect to the server. You're trying to set up NPM with Let's Encrypt certs validated via DNS.
To continue troubleshooting you should eliminate all network paths and test from the VPS itself (SSH into the system). Once you have NPM set up, you should be able to test the certificate locally by connecting to the port NPM is exposed on. Assuming you exposed port 443:
openssl s_client -connect 127.0.0.1:443 -showcerts
If you can validate that NPM is serving the endpoint with the correct certificate, you can move on to troubleshooting your network path.
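Once the local check passes, a quick way to test the external path from your machine without relying on DNS (the domain and VPS IP below are placeholders):
curl -v --resolve service.example.com:443:203.0.113.10 https://service.example.com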
- Comment on Seeking assistance getting AntennaPod, Podfetch, and GPodder to work together. 11 months ago:
The iOS app is still in beta, available through TestFlight.
- Comment on Seeking assistance getting AntennaPod, Podfetch, and GPodder to work together. 11 months ago:
Not what you're asking, but I want to share what I'm using with great success. Audiobookshelf has a podcasts feature and is an all-in-one solution for downloading (it auto-downloads on a chosen schedule) and playing on multiple devices with sync.
- Comment on Docker Compose Issue with Stale Data on Startup 11 months ago:
Have you tried clearing the build cache? That docker compose file has a build init container. The latest version of docker compose has a --no-cache option.
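Roughly like this:
docker builder prune --all # clear the build cache
docker compose build --no-cache # rebuild images without cached layers
docker compose up -d --force-recreate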
- Comment on Cross-container/vm communication security on Proxmox 1 year ago:
If your goal is to improve security, you would have to look into e2e encryption. This means network traffic needs to be encrypted both between the client and the proxy and between the proxy and the service. You didn't elaborate on your Proxmox/network setup, so I will assume that you have multiple Proxmox hosts and an external router, perhaps with a switch between them. Traffic this way flows between multiple devices.
Some solutions:
- You could run another proxy on the same VM as the service just to encrypt traffic, if the service doesn't support that itself. Then have your main proxy connect to that proxy instead of to the service directly. This way unencrypted traffic never leaves the VM. A step up would be to use certificate validation. A step up from there would be to run an internal certificate authority, issue certificates from it, and validate them against the CA cert.
- Another alternative is to use an overlay network between the proxy and the VM. There are a bunch of different options, including more advanced projects that combine zero-trust concepts, like Nebula.
- If you start building advanced overlay networks you may as well look at Kubernetes, as it streamlines deployment of both services and the underlying infrastructure. You could deploy Calico with a WireGuard-encrypted network. The setup gets more complicated than a simple home lab needs, though. It all boils down to why you do self-hosting. If it's to learn the tech, then go for it all the way. If you want to focus on reliability and simplicity, don't overcomplicate things. Many people run everything on a single node with just Docker, using Docker networks between services to separate internal traffic from proxy traffic (rough sketch below).
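Here is that last, simple approach sketched out (container and image names are made up):
docker network create --internal backend # containers on this network have no direct outside access
docker run -d --name app --network backend example/app
docker run -d --name proxy -p 80:80 -p 443:443 jc21/nginx-proxy-manager:latest
docker network connect backend proxy # the proxy can now reach app over the backend network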
- Comment on Is anyone successfully running Nginx in opnsense? 1 year ago:
The nice thing about a VM with Nginx Proxy Manager (or just nginx) running on the same host as the rest (or the majority) of the VMs is that internal traffic doesn't traverse other devices. It looks like a more secure setup this way. That being said, the chance of a packet sniffer running on your network between the proxy and the destination VM is low.
I'm in a similar situation to yours. I run an overpowered router that barely sees any CPU usage.
I haven't tried nginx, but I'm running HAProxy on my OPNsense router; I saw in the community forums that most people use that. After going through a tutorial for one service it's pretty easy to grasp the configuration concept and replicate it for other services. I think the only confusing part is that backend pools can have multiple backends configured while only one is in use. The test-syntax button ensures you don't make mistakes. HAProxy has more powerful backend options than you will probably need. I moved the router management port to a higher number and set up the proxy to run on 443. A wildcard DNS entry then points to the router, which allows me to keep adding services as needed.
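As a quick sanity check when adding a new service (hostnames and the router IP are placeholders, and this assumes your HAProxy rules match on the requested hostname):
dig +short newservice.home.lan @192.168.1.1 # should return the router's IP via the wildcard record
curl -vk https://newservice.home.lan/ # HAProxy on 443 should pick the matching backend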
- Comment on What Self-Hosted Single Sign-On (SSO) do you use? 1 year ago:
I started integrating Authentik lately based on seeing people recommend it. It has a pretty steep learning curve. I had to follow tutorials, and even then each integration has its own quirks. I got stuck on integrating my internal e-mail server with the LDAP provider (via Authentik). It's definitely capable, but it's a project to integrate all your services.
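If it helps, a plain bind/search test from the mail host against the Authentik LDAP outpost can at least rule out connectivity and credential problems. The DNs below assume the outpost's default base DN and a service account named mailservice; yours will likely differ:
ldapsearch -x -H ldap://authentik.home.lan:389 -D "cn=mailservice,ou=users,dc=ldap,dc=goauthentik,dc=io" -W -b "dc=ldap,dc=goauthentik,dc=io" "(cn=*)"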
- Comment on Two Nextcloud instances for security? 1 year ago:
Here is my security point of view. A second instance would be too much overhead for just the one use case of sharing files. You have to decide how comfortable you are with exposing anything in your private network. I would personally not expose a Nextcloud instance, because it's a complex application with many modules, each possibly having 0-day exploits. If your goal is to share a file and self-host, I would look into dedicated apps for that purpose. You can set up a simple MicroBin/PrivateBin on dedicated hardware in a DMZ network behind a firewall. You should run IDS/IPS on your open ports (pfSense/OPNsense have that, and it pairs nicely with CrowdSec). You could also look into Cloudflare Tunnels to expose your dedicated file-sharing app, but I would still use as much isolation as possible (ideally physical hardware) so that it isn't easy to compromise your local network in the event of a breach. Regardless, a self-hosted solution will always pose risks and management overhead if you want to run a tight setup. It's much easier to use a public cloud solution; for example, Proton Drive is encrypted and you can share files via links with people.
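As a minimal sketch of the dedicated-app route (the image name and default port are from memory, so treat them as assumptions), MicroBin is a single container that a small DMZ box can run, with only the proxy or tunnel exposing it:
docker run -d --name microbin -p 8080:8080 danielszabo99/microbin:latest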