pyrosis
@pyrosis@lemmy.world
- Comment on FAA grounds SpaceX after rocket falls over in flames. 2 months ago:
I remember the old videos of rockets exploding on launch pads when we were first building them. We have come a long way.
I suspect they will just learn something new from this and the rockets will last even longer.
- Comment on Best Guest VM Filesystem for NTFS Host 2 months ago:
That’s what I said: CoW on top of CoW is bad. Pretty sure ext4 isn’t an option on OPNsense, just UFS or ZFS, which is the only reason I mentioned it at all when presented with that choice.
- Comment on Chromecast / Firestick Self Host Replacement 2 months ago:
I have been considering just installing Debian on a small PC, then setting the Jellyfin Media Player application to auto start. I can think of a few different ways to get this done, maybe with a couple of user accounts.
I like the idea of being able to change the application that automatically starts. Maybe I want to try Kodi again. I would just change the startup app.
- Comment on Suggestions for Improving Linux Server Security: Beyond User Permissions and Groups? 2 months ago:
Get your firewall right then maybe add fail2ban.
You could also consider IDS/IPS on your primary router/firewall if this is internal. If not, you can install Suricata on a public server. Obviously if you go with something as powerful as Suricata you no longer need fail2ban.
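If you do stick with fail2ban, the sshd jail is enough to start with. A rough sketch in /etc/fail2ban/jail.local (the retry and ban values are just illustrative):

[sshd]
enabled = true
maxretry = 5
findtime = 10m
bantime = 1h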
Keep a sharp eye on any users with sudo. Beyond that consider docker as others have mentioned.
It does add to security because it gives the developers a bit more control over which packages their applications use. It creates a more predictable environment.
- Comment on Best Guest VM Filesystem for NTFS Host 2 months ago:
It seems that way, but UFS performs better than ZFS on top of ZFS. The only OS I ran into that with was OPNsense when I was playing with a virtualized firewall.
- Comment on Best Guest VM Filesystem for NTFS Host 2 months ago:
Within guests these days I just use XFS, UFS, or NTFS depending on the os. The hypervisor can have zfs or ceph.
- Comment on Router died - Replacement/solution recommendations 2 months ago:
I’m spoiled now. I prefer Ubiquiti equipment for my network, security cameras, and even door access.
However, if you prefer completely open source I can recommend OPNsense and OpenWrt. Personally I prefer a single point of configuration, so it’s all Ubiquiti for me. It makes it easy to restore a complete network configuration, which, as you are discovering, is otherwise a pain.
Maybe start with the new Cloud Gateway Max as a router if you are interested.
- Comment on Many Network Interfaces per VM/CT - Good Practice? 3 months ago:
When I was experimenting with this, it didn’t seem like you had to distribute the cert to the service itself as long as the internal service was listening on an HTTPS port. The certificate management was still happening on the proxy.
The trick was more getting the host names right and targeting the proxy for the hostname resolution.
Either way IP addresses are much easier but it is nice to observe a stream being completely passed through. I’m sure it takes a load off the proxy and stabilizes connections.
- Comment on Many Network Interfaces per VM/CT - Good Practice? 3 months ago:
This would be correct if you are terminating SSL at the proxy and it’s just passing plain HTTP to the service. However, if you can enable SSL on the service itself, it’s possible to take advantage of full passthrough if you care about such things.
- Comment on Mirror all data on NAS A to NAS B 6 months ago:
My favorite is using the native ZFS replication (snapshot plus send/receive), though that requires ZFS on both ends and snapshots configured properly.
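Something like this, assuming a source dataset tank/data on NAS A and a target pool named backup on NAS B (both names are hypothetical) with SSH between the boxes:

zfs snapshot tank/data@mirror-1
zfs send tank/data@mirror-1 | ssh nas-b zfs receive backup/data    # full initial copy
# later runs only send what changed since the previous snapshot
zfs snapshot tank/data@mirror-2
zfs send -i tank/data@mirror-1 tank/data@mirror-2 | ssh nas-b zfs receive backup/data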
- Comment on Jellyfin | "We are pleased to announce the latest stable release of Jellyfin, version 10.9.0!" 6 months ago:
I noticed some updates on live video streaming. I do wonder if that will help with how Jellyfin handles commercial breaks.
Let’s say I have an m3u8 playlist with a bunch of video streams. I’ve noticed in Jellyfin that when they go to something like a commercial the stream freaks out. It made me wonder if the player just couldn’t understand the ad insertion.
Anyway, wonderful update regardless and a huge improvement.
- Comment on Move UnRaid from metal to Proxmox 6 months ago:
Another thing to keep in mind with ZFS is that underlying VM disks will perform better if the pool is a mirror or a stripe of mirrors. RAIDZ1/RAIDZ2 type pools are better for media and files. VM disk IO improves dramatically on the mirror-style layouts. Just passing on what I’ve learned over time optimizing systems.
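For example, a striped mirror for VM storage might be created like this (device names are hypothetical):

zpool create -o ashift=12 vmpool mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd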
- Comment on Move UnRaid from metal to Proxmox 6 months ago:
Bookmark this if you utilize zfs at all. It will serve you well.
jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/
You will be amazed at ZFS performance in Proxmox due to all the tuning that is possible. If this is going to be an existing ZFS pool, keep in mind it’s easier to just install Proxmox with the ZFS option and let it create a ZFS rpool during setup. For the rpool, tweak a couple of options. Make sure ashift is at least 12 during the install, or 13 if you are using some crazy fast SSD as the Proxmox disk for the rpool.
It needs to be 12 if it’s a modern-day spinner and that’s probably a good setting for most SSDs. Do not go over 12 if it’s a spinning disk.
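ashift can’t be changed after a pool is created, so it’s worth checking what a pool ended up with. A sketch, assuming the pool is named rpool:

zpool get ashift rpool        # pool-level property; 0 means it was auto-detected
zdb -C rpool | grep ashift    # per-vdev ashift actually in use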
Now beyond that, you can directly import your existing ZFS pool into Proxmox with a single import command, assuming you have an existing pool.
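A sketch, assuming the existing pool is named tank (hypothetical name):

zpool import          # lists pools the system can see
zpool import tank     # imports it; add -f if it was last used on another host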
In this scenario zfs would be fully maintaining disk operations for both an rpool and a media pool.
You should consider tweaking a couple of things to really improve performance via the guide I linked.
Proxmox VMs/zvols live in their own dataset. Before you start getting too crazy creating VMs, make sure you are taking advantage of all the performance tweaks you can. By default Proxmox leaves the record size for all datasets at 128k. qcow2, raw, and even zvols (where the equivalent knob is volblocksize) will benefit from a record size of 64k because it tends to improve the underlying filesystem performance of things like ext4, XFS, even UFS. IMO it’s silly to create VM filesystems like btrfs if your VM is sitting on top of a CoW filesystem.
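A sketch of setting that on the VM dataset, assuming the default Proxmox layout where VM disks live under rpool/data:

zfs set recordsize=64K rpool/data
zfs get recordsize rpool/data    # confirm; existing data keeps its old record size until rewritten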
Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for ZFS. The newer one (zstd) is pretty good but can slow things down a bit for active operations like live VM disks. So make sure your default compression is lz4 for datasets with VM disks. Honestly it’s just a good default to specify for the entire pool. You can select other compression for datasets with more static data.
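Set it once at the pool level and the datasets inherit it unless overridden, for example (pool name assumed to be rpool):

zfs set compression=lz4 rpool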
If you have a media dataset full of files like music, videos, and pics, setting a record size of 1M will heavily improve disk IO.
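For example, assuming a media dataset named tank/media (hypothetical):

zfs set recordsize=1M tank/media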
In Proxmox, ZFS will default to grabbing half of your memory for ARC. Make sure you change that after install. It’s a modprobe config file that defines zfs_arc_max in bytes. Set the max to something more reasonable if you have 64 gigs of memory. You can also define zfs_arc_min.
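A sketch capping ARC at 8 GiB (pick whatever number fits your box):

echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
update-initramfs -u -k all    # rebuild the initramfs so the limit applies at boot
# or apply it live: echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max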
Some other huge improvements? If you are using an SSD for your Proxmox install I highly recommend you install log2ram on your hypervisor. It will stop all those constant log writes to your SSD and still sync them to disk on a timer and at shutdown/reboot. It’s also a huge performance and SSD-lifespan improvement to migrate /tmp and /var/tmp to tmpfs.
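Hypothetical fstab entries for the tmpfs part (sizes are just examples):

tmpfs  /tmp      tmpfs  defaults,noatime,nosuid,nodev,size=2G  0 0
tmpfs  /var/tmp  tmpfs  defaults,noatime,nosuid,nodev,size=1G  0 0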
So many knobs to turn. I hope you have fun playing with this.
- Comment on Many Network Interfaces per VM/CT - Good Practice? 6 months ago:
I agree with this. The only VM I have that has multiple interfaces is an OPNsense router VM heavily optimized for KVM to reach 10Gb speeds.
One of the interfaces beyond WAN and LAN links to a Proxmox services bridge. It’s a Proxmox bridge I gave to a container, and it’s just a gateway in OPNsense that points traffic destined for services directly at the container IP. It keeps the service traffic on the bridge instead of having to hit the physical network.
- Comment on Many Network Interfaces per VM/CT - Good Practice? 6 months ago:
I use docker networks, but that’s me. They are created for every service and it’s easy to target the gateway. Just make sure DNS is correct for your hostnames.
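As a rough sketch (the names are just examples), a user-defined network gives containers built-in DNS for each other’s names:

docker network create media_net
docker run -d --name jellyfin --network media_net jellyfin/jellyfin
docker run -d --name proxy --network media_net nginx
# inside the proxy container, the hostname "jellyfin" now resolves via Docker's embedded DNS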
Lately I’ve been optimizing remote services for reverse proxy passthrough. Did you know that it can break streams momentarily and make your proxy work a little harder if your hostnames don’t match outside and in?
In other words, if you want full passthrough of a TCP or UDP stream to your server, without the proxy breaking it and then opening a new stream, you have to make sure the internal network and external network are using the same FQDN for the service you are targeting.
It actually can break passthrough via SNI if they don’t use the same hostname and cause a slight delay. Kinda matters for things like streaming video, especially if you are using a reverse proxy and the service supports QUIC or HTTP/2.
So a reverse proxy entry that simply passes without breaking the stream and resending it might look like…
Obviously you would need to get the HTTPS port working on Jellyfin and have IPv6 working with internal DNS in this example.
server {
    listen 443 ssl;
    listen [::]:443 ssl;  # Listen on IPv6 address
    server_name jellyfin.example.net;
    ssl_certificate /path/to/ssl_certificate.crt;
    ssl_certificate_key /path/to/ssl_certificate.key;
    location / {
        proxy_pass https://jellyfin.example.net:8920;  # Use FQDN
        ...
    }
}
- Comment on Move UnRaid from metal to Proxmox 6 months ago:
Yup, you can. In fact you likely should, and you will probably find disk IO improves dramatically compared to your original plan. It’s better in my opinion to let the hypervisor manage disk operations. That means, in my opinion, it should also share files with SMB and NFS, especially if you are already considering NAS-type operations.
Since proxmox supports zfs out of the box along with btrfs and even XFS you have a myriad of options. You combine that with cockpit and you have a nice management interface.
I went the ZFS route because I’m familiar with it and I appreciate its native sharing options built into the filesystem. It’s cool to have the option to create a new dataset off the pool and directly pass it into a new LXC container.
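The sharing bit can be as simple as this sketch (the pool name tank is hypothetical, and sharesmb assumes Samba is configured for ZFS usershares):

zfs create tank/media
zfs set sharenfs=on tank/media
zfs set sharesmb=on tank/media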
- Comment on Move UnRaid from metal to Proxmox 6 months ago:
It depends on your needs. It’s entirely possible to just format a bunch of disks as XFS and set up some mount points you hand to a union filesystem like mergerfs or whatever. Then you would just hand that to Proxmox directly as a storage location. Management can absolutely vary depending on how you do this.
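A hypothetical mergerfs fstab line pooling three XFS disks into one mount point:

/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0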
At its heart it’s just Debian so it has all those abilities of Debian. The web UI is more tuned to vm/lxc management operations. I don’t really like the default lvm/ext4 but they do that to give access to snapshots.
I personally just imported an existing ZFS pool into Proxmox and configured it to my liking. I discovered options like directly passing datasets into LXC containers with LXC options like lxc.mount.entry.
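As a hypothetical example, a line like this in /etc/pve/lxc/&lt;id&gt;.conf bind-mounts a dataset’s mountpoint into a container:

lxc.mount.entry: /tank/media mnt/media none bind,create=dir 0 0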
I recently finished optimizing my Proxmox for disk IO performance. It’s modified with things like log2ram, tmpfs in fstab for /tmp and /var/tmp, TCP congestion control set to cubic, a virtual OPNsense heavily modified for 10Gb performance, and a bunch of ZFS media datasets migrated into one media dataset optimized for performance. There are just so many tweaks and knobs to turn in Proxmox that can increase performance. Folks even mention Docker; I’ve got it contained in an LXC. My active RAM usage for all my services is down to 7 gigs and disk IO bounces between 0.9 and 8%. That’s crazy, but it just works.
- Comment on Move UnRaid from metal to Proxmox 6 months ago:
Have you considered the increase in disk IO, and that hypervisors prefer to be in control of all hardware? Including disks…
If you are set on proxmox consider that it can directly share your data itself. This could be made easy with cockpit and the zfs plugin. The plugin helps if you have existing pools. Both can be installed directly on proxmox and present a separate web UI with different options for system management.
The safe things here to use are the filesharing and pool management operations. Basically use the proxmox webui for everything it permits first.
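A rough sketch of the Cockpit part on the Proxmox host (it’s Debian underneath); the pool management and file-sharing pieces come from add-on plugins such as 45Drives’ cockpit-zfs-manager and cockpit-file-sharing, which is an assumption about which plugins are meant here:

apt install cockpit    # web UI on port 9090, separate from the Proxmox UI on 8006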
Either way have fun.
- Comment on Move UnRaid from metal to Proxmox 6 months ago:
It’s the production vs development issue. My advice is the old tech advice: “If it’s not broken, don’t try to fix it.”
Modified here into keeping a separate Proxmox development environment. BTW, Proxmox is perfect for this with VM and container snapshots.
When you get a vm or container in a more production ready state then you can attempt migrations. That way the users don’t kill you :)
- Comment on Move UnRaid from metal to Proxmox 6 months ago:
I completely agree with most of your comment, minus the freedom to choose different disk sizes. You absolutely can do that with btrfs, or by throwing a virtual layer on top of some disks with something like mergerfs.
- Comment on m3u (iptv) client which is not Jellyfin? 6 months ago:
Music playlists are different from Plex. You can create them, import them, or generate an instant list.
4K is seamless and performs better IMO. You can use transcoding or not if you have files the way you want them. If you do, you can select on a per-user basis who gets to transcode.
You can set bandwidth limits.
I’ve seen a feature that allows multiple users to stream the same movie so I guess you can watch at the same time. I use NPM, and often a couple of peeps might watch a movie at the same time without using this feature and it works fine.
I use the client app on Android and a Fire Stick at the moment. I think I just downloaded it, but you can side load too if you want. The media server app is available for various OSes, so technically you could set it up on whatever you want. Just check your app store.
jellyfin.org/downloads/clients/
It can plug into HDHomeRun tuners or m3u playlists for live TV if that is your use case. It has a plugin for NextPVR and Tvheadend if you utilize those for over-the-air or already have an m3u setup in those PVR services. Those are great BTW and available in docker containers.
It always defaulted to whatever my files are encoded in. It absolutely can transcode to support other clients, and you decide the preferences. I did notice that since most of my files are H.264 with a few H.265, sometimes it helped to turn transcoding off because the client supported the codec natively; Jellyfin was transcoding H.265 MKVs to something like MP4. Anyway, a quirk.
Login is pretty simple. Users can change their passwords. It can generate codes to approve a new device if you are already logged into an app on your phone, like 6 temporary digits. You can also set up PINs, or whatever they call them, under Users.
- Comment on How do you handle family requests that you disagree with? 6 months ago:
Pretty much this: it gets its own folder and, in Jellyfin, its own library. You just give mom access to that and whatever else you want, and you unselect that library for everyone else. The setting is under Users. It’s straightforward and checkbox-based. You probably have it set to all libraries right now; uncheck that and you can pick and choose per user.
- Comment on Mullvad VPN: Introducing Defense against AI-guided Traffic Analysis (DAITA) 6 months ago:
I doubt it would matter in some environments at all.
As an example, a PC managed by a domain controller can have its firewall rules and DHCP/DNS options changed via group policy. At that point the firewall rules can simply be modified.
- Comment on Mullvad VPN: Introducing Defense against AI-guided Traffic Analysis (DAITA) 6 months ago:
Of course, but you don’t control the rogue DHCP servers some asshat might plug in anywhere that isn’t your network.
- Comment on Mullvad VPN: Introducing Defense against AI-guided Traffic Analysis (DAITA) 6 months ago:
How about defense against dhcp option 121 changing the routing table and decloaking all VPN traffic even with your kill switch on? They got a plan for that yet? Just found this today.
- Comment on Self-hosted Jellyfin CPU or GPU for 4K HDR transcoding? 6 months ago:
Nothing but love for that project. I’ve been using docker-ce and docker-compose. I had portainer-ce but just got tired of it. It’s easier for me to just make a compose file and get things working exactly how I want.
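For the curious, a minimal compose sketch for Jellyfin (the image name is the official one, the paths are hypothetical); save it as docker-compose.yml and run docker compose up -d:

services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /mnt/media:/media:ro
    restart: unless-stopped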
- Comment on m3u (iptv) client which is not Jellyfin? 6 months ago:
Oh then definitely tvheadend. You can run the server lots of ways even docker. Also has plugin support.
- Comment on m3u (iptv) client which is not Jellyfin? 6 months ago:
Are you using tvheadend and their jellyfin plugin? Asking out of curiosity.
github.com/tvheadend/tvheadend
Anyway Plex and emby come to mind.
- Comment on Linux Distro for Jellyfin HTPC 6 months ago:
I’ll be honest, OP: if it’s on a TV I use the newer Fire Sticks with the Jellyfin app. They already have support for various codecs and stream from my server just fine. Cheap too, and they come with a remote.
If I were just trying to get a homemade client up I would consider Debian bookworm and just utilize the .deb from the GitHub link here…
jellyfin.org/downloads/clients/
Personally I’d throw on Cockpit to make remote administration a bit easier and set up an auto-start at login for the Jellyfin Media Player with the startup apps. You can even add a launch flag to start it full screen, like…
jellyfin --fullscreen
The media player doesn’t really need special privileges so you could create a basic user account just for jellyfin.
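One way to wire up the auto-start for that dedicated user is an autostart entry; a hypothetical ~/.config/autostart/jellyfin.desktop might look like:

[Desktop Entry]
Type=Application
Name=Jellyfin Media Player
Exec=jellyfin --fullscreen
X-GNOME-Autostart-enabled=true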
- Comment on Self-hosted Jellyfin CPU or GPU for 4K HDR transcoding? 6 months ago:
Setups for hardware decoding are based on the underlying OS. A quite common example is Docker on Debian or Ubuntu. You will need to pass the appropriate /dev/ directories, and at times files, into your Jellyfin docker container with the devices option. Commonly that would be /dev/dri.
It gets more complicated with a vm because you are likely going to be passing the hardware directly into the vm which will prevent other devices outside the vm from using it.
You can get around this by placing docker directly on the OS, or by placing docker in a Linux container with appropriate permissions and the same devices passed into the Linux container. In this manner, system devices and other services will still have access to the video card.
All this to say: it depends on your setup and where you have docker installed how you will pass the hardware into Jellyfin. However, Jellyfin on docker will need you to pass the video card into the container with the devices option, and docker will need to see the device to be able to do that.
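A sketch of the compose side of that, assuming VAAPI-style decoding via /dev/dri (the group ID is hypothetical; check yours with getent group render):

services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri
    group_add:
      - "989"   # host render group so the container can use the GPU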