Comment on "Getting worn out with all these docker images and CLI hosted apps"
EncryptKeeper@lemmy.world 1 week ago
I’m really confused here: you don’t like how everything is containerized, and your preferred method is to run Proxmox and containerize everything?
Pika@sh.itjust.works 1 week ago
I don’t like how everything is Docker-containerized.
I already run Proxmox, which containerizes things by design with its CTs and VMs.
Running a Docker image on top of that is just wasting system resources.
EncryptKeeper@lemmy.world 1 week ago
Nothing is “docker containerized”. Docker is just a daemon and a set of tools for managing OCI-compliant containers.
No? If you spun up one VM in Proxmox, installed Docker, and used it to run 10 containers, that would use fewer system resources than running 10 LXC containers directly on Proxmox.
Pika@sh.itjust.works 1 week ago
Are you saying that running Docker inside a container (which at this point would be two layers deep) uses fewer resources than 10 single-layer-deep containers?
I can agree with the statement that a single VM running Docker with 10 containers uses less than 10 CTs that each have Docker installed and run their own containers (but that’s not what I do, or what I’m asking for). I currently use one CT that has Docker installed with all my Docker images, which I wouldn’t do if I had the ability not to, but some apps require Docker. This removes most of the benefits you get from using Proxmox in the first place. One of the biggest advantages of using the hypervisor is the ability to isolate and run services as their own containers. Throwing everything into a VM with Docker bypasses that while adding overhead to the system.
To explain: installing Docker into a VM on Proxmox and then running every container in it does waste resources. You have the resources that Docker requires to function (currently 4 GB of RAM per their website, though when testing I’ve seen as low as 1 GB work fine), plus CPU and whatever storage it takes up, inside a VM (which also uses more than CTs do, since it no longer shares the host kernel). Compared to 10 CTs that are fine-tuned for their specific apps, you will get better performance running the CTs than a VM running everything, while keeping your ability to snapshot and avoiding the extra layer and ephemeral design that Docker has (which can be a good and a bad thing, but when troubleshooting I lean towards good).
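The per-app fine-tuning mentioned above is done in each container’s Proxmox config. A minimal sketch of such an LXC config, with a hypothetical container ID, storage name, and resource caps chosen purely for illustration:

```
# /etc/pve/lxc/101.conf — hypothetical example, not from the thread
arch: amd64
ostype: debian
hostname: media-ct
cores: 1
memory: 512
swap: 256
rootfs: local-lvm:vm-101-disk-0,size=4G
unprivileged: 1
```

The same caps can also be applied live with `pct set 101 -memory 512 -cores 1` rather than editing the file by hand.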
EncryptKeeper@lemmy.world 1 week ago
If those 10 single-layer-deep containers are Proxmox’s LXC containers, then yes, absolutely. OCI containers are isolated processes that run single services, usually just a single binary. There’s no full OS, no init system. They’re very lightweight with very little overhead. They’re “containerized services”. LXC containers, on the other hand, are much heavier “system containers” that have a full OS and user space, init system, file systems, etc. They are one step removed from being full-size VMs. In short, your single LXC running Docker and a bunch of containers inside of it is far more resource-efficient than running a bunch of separate LXC containers.
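The “containerized service” idea shows up in how such services are declared: typically little more than an image and a few settings, with no OS or init system to manage. A compose sketch (the image names and ports here are generic examples, not anything from the thread):

```yaml
# docker-compose.yml — illustrative sketch
services:
  web:
    image: nginx:alpine        # single service, single process tree
    ports:
      - "8080:80"
    restart: unless-stopped
  db:
    image: postgres:16-alpine  # shares the host kernel; no init system or full userland per service
    environment:
      POSTGRES_PASSWORD: example
    restart: unless-stopped
```

All services defined this way share one kernel and one host userland, which is where the resource savings over per-service LXC containers come from.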
I mean, that’s exactly what Docker containers do, but more efficiently.
I mean, that’s sort of the entire idea behind Docker containers as well. It can even be automated for zero-downtime updates and deployments, as well as rollbacks.
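The update-and-rollback flow alluded to here usually comes down to pinning image tags and letting a health check gate the new version. A sketch, where the image name, tag, and health endpoint are all hypothetical:

```yaml
# docker-compose.yml fragment — illustrative sketch
services:
  app:
    image: ghcr.io/example/app:1.4.2   # pin a tag: rollback = change the tag back and redeploy
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```

An update is then `docker compose pull && docker compose up -d`; a rollback is editing the tag back and re-running the same command.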
That is incorrect. Let’s break away from containers and VMs for a second and look deeper into what is happening under the hood here.
Option A (one VM with Docker + containers): one OS, one init system, one full set of Linux libraries.
Option B (10 LXC containers): ten full operating systems, ten separate init systems, ten separate sets of Linux libraries.
Option A is far more lightweight, and becomes a more attractive option the more services you add.
And not only that: as you found out, you don’t need to run a full VM for your Docker host. You could just use an LXC. Though in that case I’d still prefer the one VM, so that your containers aren’t sharing your Proxmox host’s kernel.