Pete90
@Pete90@feddit.de
- Comment on Traefik Docker Lables: Common Practice 8 months ago:
Thanks, I’ll let you know once/if I figure it out!
- Comment on Traefik Docker Lables: Common Practice 8 months ago:
I did what you suggested and reduced (1) the number of running services to a minimum and (2) the networks Traefik is a member of to a minimum. It didn’t change a thing. Then I opened a private browser window and saw much faster loading times. Great. I then set everything back and refreshed the private browser window: still fast. Okay, guess it’s not Traefik after all. The final nail in the coffin for my theory: I use two Traefik instances, and Homepage still loads its widgets left to right, top to bottom (the order from the yaml file). The order doesn’t correspond to the instances; it’s more or less random. So I’m assuming the slowdown has something to do with either (a) caching in Traefik or (b) the way Homepage handles the API request: IP:PORT (fast) or subdomain.domain.de. Anyway, thanks for your help!
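For anyone wondering, this is roughly what I mean by the two kinds of widget URLs in Homepage’s services.yaml. Just a sketch; the service, IP, port and API key are placeholders, not my actual setup:

- Media:
    - Jellyfin:
        href: https://jellyfin.domain.de        # link the tile opens
        widget:
          type: jellyfin
          # direct to the container, bypassing the reverse proxy (the fast case)
          url: http://192.168.1.20:8096
          # via Traefik instead (the slow case I described):
          # url: https://jellyfin.domain.de
          key: REPLACE_WITH_API_KEY              # placeholder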
- Comment on Traefik Docker Lables: Common Practice 8 months ago:
Thank you so much for your thorough answer, this is very much a topic that needs some reading/watching for me. I’ve checked, and I already use all of those headers. So in the end, from a security standpoint, not even having port 80 open would be best. Then no one could connect unencrypted. I’ll just have to drill it into my family to use HTTPS if they have any problems.
It was interesting to see how the whole process between browser and server works, thanks for clearing that up for me!
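For reference, the headers I mean are set through a Traefik middleware, roughly like this in the dynamic configuration (a sketch only; the middleware name and the exact values are just an example):

http:
  middlewares:
    secure-headers:
      headers:
        stsSeconds: 31536000          # HSTS: browsers remember to use HTTPS for a year
        stsIncludeSubdomains: true
        stsPreload: true
        contentTypeNosniff: true
        browserXssFilter: true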
- Comment on Traefik Docker Lables: Common Practice 8 months ago:
If I do that, can I still connect via HTTP and the browser will then be redirected? I don’t think I have a problem with remembering HTTPS, but my family will…
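For context, the redirect being discussed lives in Traefik’s static configuration and looks roughly like this (entrypoint names vary per setup; this is a sketch, not my exact config):

entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure     # anything arriving on port 80...
          scheme: https     # ...gets redirected to HTTPS
  websecure:
    address: ":443"

With that in place, typing plain http:// still works: Traefik answers with a redirect and the browser follows it to HTTPS.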
- Comment on Traefik Docker Lables: Common Practice 8 months ago:
That’s a great idea, I’ll give it a try tomorrow. The weird thing is, the web UIs load just fine; at least 90% of the time it’s almost instant…
- Comment on Traefik Docker Lables: Common Practice 8 months ago:
Each service stack (e.g. media, ISO downloading) has its own network and Traefik is in each of those networks as well. It works and separates the stacks from each other (I don’t want stack A to be able to access stack B, which would be the case with a single Traefik network, I think).
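Sketched out, the Traefik side of that looks something like this (network names and the image tag are just examples, not a definitive setup):

# traefik/docker-compose.yml (sketch)
services:
  traefik:
    image: traefik:v2.11
    networks:
      - media        # can reach containers in the media stack
      - downloads    # can reach containers in the download stack

networks:
  media:
    external: true   # created by the media stack's compose file
  downloads:
    external: true   # created by the download stack's compose file

Each stack only joins its own network, so only Traefik sits in all of them and the stacks can’t talk to each other directly.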
- Submitted 8 months ago to selfhosted@lemmy.world | 12 comments
- Submitted 8 months ago to selfhosted@lemmy.world | 0 comments
- Comment on Resticity - a cross-platform frontend for restic 8 months ago:
Awesome, I’m just getting into restic!
- Comment on What does your current setup look like? 8 months ago:
Great setup! Be careful with the SSD though, Proxmox likes to eat those for fun with all those small but numerous writes. A used, small-capacity enterprise SSD can be had for cheap.
- Comment on When Pi-hole is down? 9 months ago:
I tried this. I put a DNS override for google.com on one but not the other AdGuard instance. Then I did a DNS lookup and the answer (IP) changed randomly from the correct one to the one I used for the override. I’m assuming the same goes for the scenario with the public DNS as well. In any case, the response delay should be similar, since the local Pi-hole instance has to contact the upstream DNS server anyway.
- Comment on Feedback on Network Design and Proxmox VM Isolation 10 months ago:
Only Nextcloud is externally available so far, maybe I’ll add Vaultwarden in the future.
I would like to use a VPN, but my family is not tech literate enough for this to work reliably.
I want to protect these public facing services by using an isolated Traefik instance in conjunction with Cloudflare and Crowdsec.
- Comment on Feedback on Network Design and Proxmox VM Isolation 10 months ago:
Both. I have limited hardware for now, so I’m still using my ISP router as my WLAN AP. Not the best solution, I know, but it works and I can separate my Home-WLAN from my Guest-WLAN easily.
I want to use a dedicated AP at some point in the future, but I’d also need a managed switch as well as the AP itself. Unfortunately, that’s not in my budget for now.
- Comment on Feedback on Network Design and Proxmox VM Isolation 10 months ago:
Thank you so much for your kind words, very encouraging. I like to do some research alongside my tinkering, and I like to challenge myself. I don’t even work in the field, but I find it fascinating.
The ZTA is/was basically what I was aiming for. With all those replies, I’m not so sure if it is really needed. I have a NAS with my private files and a Nextcloud with the same. The only really critical thing will be my Vaultwarden instance, to which I want to migrate from my current KeePass setup. And this got me thinking about how to secure things properly.
I mostly found it easy to learn things when it comes to networking by disabling all traffic and then watching the OPNsense logs. Oh, my PC uses this and that port to print on this interface. Cool, I’ll add that. My server needs access to the SMB port on my NAS, added. I followed this logic through, which in total got me around 25-30 firewall rules making heavy use of aliases, plus a handful of floating rules.
My goal is to have the control over my networking on my OPNsense box. There, I can easily log in, watch the live log and figure out what to allow and what not. And it’s damn satisfying to see things being blocked. No more unknown probes on my Nextcloud instance (or at least far fewer).
The question I still haven’t answered to my satisfaction is whether I should build a strict ZTA or fall back to a more relaxed approach like you outlined with your VMs. You seem knowledgeable. What would you do for a basic homelab setup (Nextcloud, Jellyfin, Vaultwarden and such)?
- Comment on Feedback on Network Design and Proxmox VM Isolation 10 months ago:
This sounds promising. If I understand correctly, you have a ton of networks declared in your proxy, one for each service. So if I have Traefik as my proxy, I’d create traefik-nextcloud, traefik-jellyfin and traefik-portainer as my networks, declare them as external and assign each service its respective network. Did I get that right?
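On the service side, my understanding of that would look roughly like this (just a sketch; the network is assumed to be created beforehand, and names are made up):

# nextcloud/docker-compose.yml (sketch)
services:
  nextcloud:
    image: nextcloud:stable
    networks:
      - traefik-nextcloud          # the only network this service shares with Traefik

networks:
  traefik-nextcloud:
    external: true                 # assumed created earlier, outside this compose file

Traefik itself would then join every traefik-<service> network, same idea as with the per-stack networks, just one network per service instead of per stack.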
- Comment on Feedback on Network Design and Proxmox VM Isolation 10 months ago:
Thanks for your input. Am I understanding correctly that all devices in one VLAN can communicate with each other without going through a firewall? Is that best practice? I’ve read so many different opinions that it’s hard to tell.
- Comment on Feedback on Network Design and Proxmox VM Isolation 10 months ago:
Ah, I did not know that. So I guess I will create several VLANs with different subnets. That works as I intended it: traffic coming from one VM has to go through OPNsense.
Now I just have to figure out if I’m being too paranoid. Should I simply group several devices together (e.g. 10=Servers, 20=PCs, 30=IoT; this is what I mostly see being used) or should I sacrifice usability for more fine-grained segregation (each server gets its own VLAN)? Seems overkill, now that I think about it.
- Submitted 10 months ago to selfhosted@lemmy.world | 21 comments
- Comment on Proxmox SMB Share not reaching full 2.5Gbit speed 11 months ago:
It’s videos, pictures, music and other data as well. I’ll try playing around with compression today and see if disabling it helps at all. The CPU has 8C/16T and the container 2C/4T.
- Comment on Proxmox SMB Share not reaching full 2.5Gbit speed 11 months ago:
The disk is owned by the PVE host and then given to the container (not a VM) as a mount point. I could use PCIe passthrough, sure, but using a container seems to be the more efficient way.
- Comment on Proxmox SMB Share not reaching full 2.5Gbit speed 11 months ago:
I meant megabytes (I hope that’s correct, I always mix them up). I transferred large video files, both when the file system was ZFS and when it was LVM, yet got different transfer speeds. The files were between 500MB and 1.5GB in size.
- Comment on Proxmox SMB Share not reaching full 2.5Gbit speed 11 months ago:
I don’t think it’s the CPU as I am able to reach max speed, just not using ZFS…
- Comment on Proxmox SMB Share not reaching full 2.5Gbit speed 11 months ago:
Good point. I used fio with different block sizes:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name seq_read --filename=/dev/sda
4K = IOPS=41.7k, BW=163MiB/s (171MB/s)
8K = IOPS=31.1k, BW=243MiB/s (254MB/s)
IOPS=13.2k, BW=411MiB/s (431MB/s)
512K = IOPS=809, BW=405MiB/s (424MB/s)
1M = IOPS=454, BW=455MiB/s (477MB/s)
I’m gonna be honest though, I have no idea what to make of these values. Seemingly, the drive is capable of maxing out my network. The CPU shouldn’t be the problem, it’s an i7-10700.
- Submitted 11 months ago to selfhosted@lemmy.world | 11 comments
- Comment on Sonarr-style auto-downloader for YouTube? 11 months ago:
TubeArchivist works great for me. Downloader, database and player, all in one. Even integration with Jellyfin is possible, not sure about Plex though.
- Comment on Proxmox: data storage via NAS/NFS or dedicated partition 11 months ago:
Excellent, I’ll probably do that then. Come to think of it, only one container needs write access, so I should be good to go. Users/permissions will be the same, since it’s Docker and I have one user for it. Awesome!
- Comment on Proxmox: data storage via NAS/NFS or dedicated partition 11 months ago:
Ah, very good to know. Then it makes sense to use this approach. Now I only need to figure out whether I can give my NAS access to the drives of other VMs, as I might want to download a copy of that data easily. I guess there might be a problem with permissions and file locks, but I’m not sure. I’ll look into this option, thanks!
- Comment on Proxmox: data storage via NAS/NFS or dedicated partition 11 months ago:
That makes sense, especially when the drives are equally old. Thanks for explaining it!
- Comment on Proxmox: data storage via NAS/NFS or dedicated partition 11 months ago:
I’m curious. Where is the problem with small drives for RAID5? Too many writes for such a small drive?
- Comment on Proxmox: data storage via NAS/NFS or dedicated partition 11 months ago:
Yeah, that is the hardest part. I don’t exactly know how much space will be needed for each use case. But in the end, I can just copy all my data somewhere else, delete and resize to accommodate my needs.