Comment on Getting worn out with all these docker images and CLI hosted apps
Pika@sh.itjust.works 3 weeks ago
I’m sick of everything moving to a Docker image myself. I understand the isolation is nice on a standard setup, but I use Proxmox and would love to actually use its isolation capabilities, since I already have the isolation and the environment is already suited to the program. Just give me a standard installer, for the love of tech.
smiletolerantly@awful.systems 3 weeks ago
NixOS for the win! Define your system and services, run a single command, get a reproducible, Proxmox-compatible VM out of it. Nixpkgs has basically every service you’d ever want to selfhost.
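A minimal sketch of what I mean, assuming the nix-community nixos-generators tool and its proxmox output format (the service here is just an example; any Nixpkgs module works the same way). configuration.nix:

    { ... }: {
      # enable whichever service you want to self-host
      services.vaultwarden.enable = true;
      system.stateVersion = "24.05";
    }

then build the VM image with (assuming flakes are enabled):

    nix run github:nix-community/nixos-generators -- -f proxmox -c ./configuration.nix

The output should be a vzdump backup you can restore as a VM straight from the Proxmox UI.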
slazer2au@lemmy.world 3 weeks ago
I thought that was the point of supporting OCI in the latest version: so you can pull Docker images and run them like an LXC container.
Pika@sh.itjust.works 2 weeks ago
If there’s a way of pulling a Docker container and running it directly as a CT on Proxmox, please fill me in. I’ve been using it for a year and a half to two years now, but I haven’t seen any ability to directly use a Docker container as an LXC.
EncryptKeeper@lemmy.world 2 weeks ago
This was added in Proxmox 9.1
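I haven’t played with it much yet, but from the release notes the flow is roughly: get the OCI image onto the node as a container template (the GUI can pull it, or you can fetch it yourself with something like skopeo), then create the CT from it like any other template. Very rough sketch; the image name and VMID are placeholders and the exact template handling may differ:

    # fetch an OCI image as an archive
    skopeo copy docker://docker.io/library/nginx:latest oci-archive:nginx.tar
    # after adding it to a storage as a CT template:
    pct create 120 local:vztmpl/nginx.tar --hostname nginx --memory 512 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1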
Pika@sh.itjust.works 2 weeks ago
Will be looking into that, I haven’t upgraded from 8.4 yet. That sounds like a pretty decent thing to have.
WhyJiffie@sh.itjust.works 2 weeks ago
Unless you have a zillion gigabytes of RAM, you really don’t want to spin up a VM for each thing you host. Each separate OS has a huge memory overhead, with all of its own running services, cache memory, etc. The memory usage of most services also varies a lot, so you can’t just assign a moderate 200 MB of RAM to each VM; the moment a service needs more than that it will crash, possibly leaving operations half-finished and leading to corruption. And assigning 2 GB of RAM to every VM is a waste.
I use proxmox too, but I only have a few VMs, mostly based on how critical a service is.
Pika@sh.itjust.works 2 weeks ago
For VMs, I fully agree with you, but the best part about Proxmox is the ability to use containers (CTs), which share system resources. If you give a CT two gigs of RAM, that just means it can use up to two gigs; a VM, by contrast, will actually claim that amount (and will crash if it can’t get it).
These CTs do the equivalent of what Docker does: they share the host with other services while staying isolated, they give you a system that’s easy to administer and back up, and each service stays separate.
For example, with a Proxmox CT I can snapshot the container itself before doing any kind of work, whereas if I were using Docker on a primary machine I would need to back up the Docker container completely. Additionally, having them as CTs means I can work directly on the container itself instead of having to edit a Docker file, which by design is meant to be ephemeral. If the choice is troubleshooting bare-bones versus troubleshooting a Docker container, I’m choosing bare-bones every step of the way. (You can even run an Alpine CT if you’d rather keep something close to the average Docker container setup.)
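The snapshot part is literally a couple of pct commands (105 is a made-up CT ID, and the underlying storage has to support snapshots, e.g. ZFS or LVM-thin):

    pct snapshot 105 pre-maintenance   # snapshot the CT before touching anything
    pct enter 105                      # root shell inside the CT to do the work
    pct rollback 105 pre-maintenance   # roll back if it goes sideways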
Also, on the over-committing thing: be aware that the issue you’ve described will happen with a Docker setup as well. Docker doesn’t care how much RAM the system has, and when you over-allocate the system memory-wise, it will start killing containers, potentially leaving them in the same half-finished state.
Anyway, long story short, Docker containers do basically the same thing a Proxmox CT does; they’re just ephemeral instead of persistent, and designed to be plug-and-go. In a Proxmox-style setup I haven’t found that very handy, because a lot of the time I want to share resources, such as a dedicated database or caching system, which is generally a pain in the butt to implement across Docker setups.
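To be clear about the parallel: if you do set limits on a Docker container, the kernel OOM-kills it once it goes over, and with no limits the host’s OOM killer just picks victims when the machine runs out. A quick way to see the limited case (the name, limit, and image are arbitrary):

    docker run -d --name memtest --memory=256m --memory-swap=256m nginx
    docker stats memtest   # if it ever exceeds 256 MiB it gets OOM-killed, same half-finished state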
WhyJiffie@sh.itjust.works 2 weeks ago
Oh, LXC containers! I see. I never used them because I find the LXC setup more complicated; I once tried to use a TurnKey Samba container but couldn’t even figure out where to add the container image in LXC, or how else to get it started.
But also, I like that this way my random containerized services use a different kernel than the main Proxmox kernel, for isolation.
Additionally, having them as CTs means I can work directly on the container itself instead of having to edit a Docker file, which by design is meant to be ephemeral.
I don’t understand this point. With Docker it’s rare that you need to touch the Dockerfile (which contains the container image build instructions). Did you mean the docker compose file? Or a script file that contains a docker run command?
Also, you can run commands or open a shell in any container with Docker, except when the container image doesn’t contain a shell binary (and even then, copying busybox or something similar into a volume of the container would help), but that’s rare too.
You do it like this: docker exec -it containername command. A bit lengthy, but bash aliases help.
Also, on the over-committing thing: be aware that the issue you’ve described will happen with a Docker setup as well. Docker doesn’t care how much RAM the system has, and when you over-allocate the system memory-wise, it will start killing containers, potentially leaving them in the same half-finished state.
In Docker I don’t allocate memory, and it’s not common to do so; the system memory is shared among all containers. Docker has a rudimentary resource-limit thingy, but what’s better is that you can assign containers to a cgroup and define resource limits or reservations that way. I manage cgroups with systemd “.slice” units, and it’s easier than it sounds.
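Roughly like this; the names and numbers are made up, and pointing Docker at the slice with --cgroup-parent assumes it’s running with the systemd cgroup driver:

    # /etc/systemd/system/selfhosted.slice
    [Unit]
    Description=Resource limits shared by my self-hosted containers

    [Slice]
    MemoryHigh=6G
    MemoryMax=8G
    CPUQuota=200%

then systemctl daemon-reload and start containers with docker run --cgroup-parent=selfhosted.slice ..., so everything in the slice shares those limits instead of each container getting its own hard cap.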
Pika@sh.itjust.works 2 weeks ago
They are very nice. They do share kernel space, so I can understand wanting stronger isolation, but the ability to just throw a base Debian container on, assign it a resource pool and an allocation, and install a service directly onto it, isolated from everything else, without having to use Docker’s ephemeral-by-design system (which has its perks, but I hate troubleshooting containers on it) or a full VM, is great.
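For reference, spinning one of those up is just a few commands (the VMID, resources, and exact template version are arbitrary; check pveam for the current Debian template name):

    pveam update && pveam available --section system
    pveam download local debian-12-standard_12.7-1_amd64.tar.zst
    pct create 110 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
      --hostname myservice --memory 1024 --cores 2 --rootfs local-lvm:8 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp --unprivileged 1
    pct start 110 && pct enter 110   # then just apt install whatever the service needs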
And yes, by “Docker file” I mean either the Dockerfile or the compose file (usually compose). By “straight on the container” I mean on the CT itself: my CTs don’t run Docker, period, aside from the one that hosts the primary Docker stack, so I don’t have that layer to worry about on most CTs.
As for the memory thing, I was just pointing out that Docker behaves the same way containers do if you don’t have enough RAM for what’s been provisioned. The way I read the original post was that assigning 2 gigs of RAM per VM until the system exhausts its RAM would cause crashes and corruption, which is true, but Docker runs into the same issue when the system exhausts its RAM. That’s all I meant by it. Also, cgroups sound cool; I have to admit I haven’t messed with them much. I wish Proxmox had a better resource-sharing system where you could designate a specific group as having X amount of maximum resources and then have CTs or VMs draw from those pools.
EncryptKeeper@lemmy.world 2 weeks ago
I’m really confused here, you don’t like how everything is containerized, and your preferred method is to run Proxmox and containerize everything?
Pika@sh.itjust.works 2 weeks ago
I don’t like how everything is Docker-containerized.
I already run Proxmox, which containerizes things by design with its CTs and VMs.
Running a Docker image on top of that is just wasting system resources.
exu@feditown.com 3 weeks ago
You can still use VMs and do containers in there. That’s what I do; it makes separating different services very easy.
Pika@sh.itjust.works 2 weeks ago
This is what I currently do with non-specialized services that require Docker: I have one CT running Docker Engine and throw everything on there, and if a specialized service needs Docker, it still gets its own CT. I then use Docker Agent, so I can use one administration panel.
It’s just annoying, because I would rather remove Docker from the situation entirely. When you’re running Proxmox, you essentially end up with a virtualized system inside a virtualized system: Proxmox, on bare metal, runs a virtualized environment for the CT, which then runs another virtualized environment for the Docker container.
EncryptKeeper@lemmy.world 2 weeks ago
Neither Linux containers nor Docker containers are virtualized.
Pika@sh.itjust.works 2 weeks ago
I think we might have different definitions of “virtualized” and “containers”. I use IBM’s and CompTIA’s definitions.
IBM’s definition is:
“Virtualization is a technology that enables the creation of virtual environments from a single physical machine, allowing for more efficient use of resources by distributing them across computing environments.”
IBM’s own Containers vs. Virtual Machines page acknowledges that containers are a form of virtualization. Just because they share the kernel space does not mean it’s not virtualization. I consider virtualization to be an abstraction layer between the hardware and the system being run.
CompTIA’s definition of containers would be valid here as well: containers are a virtualization layer that operates at the OS level and isolates the OS from the file system, whereas virtual machines are an abstraction layer between the hardware and the OS.
I picked up this terminology from my CompTIA Network+ book from 12 years ago, though, which classifies virtualization as “a process that adds a layer of abstraction between hardware and the system”, which is a dated definition, since OS-level virtualization such as containers wasn’t really a thing back then.