Comment on For people self hosting LLMs.. I have a couple docker images I maintain
fhein@lemmy.world 1 year ago
Awesome work! Going to try out koboldcpp right away. Currently running llama.cpp in Docker on my workstation because it would be such a mess to get the CUDA toolkit installed natively…
Out of curiosity, isn’t conda a bit redundant in Docker, since a container is already an isolated environment?
noneabove1182@sh.itjust.works 1 year ago
Yes, that’s a good one for an FAQ because I get it a lot, haha. The reason I use it is image size: the base nvidia devel image is needed for a lot of compilation during Python package installation and is huge, so instead I build everything into a conda environment and transfer that into the nvidia runtime image, which is… also pretty big, but it saves several GB of space, so it’s a worthwhile hack :)
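In Dockerfile terms the pattern is roughly a two-stage build along these lines (just a sketch: the image tags, env name, and requirements file are placeholders, not the actual Dockerfile):

```dockerfile
# Stage 1: heavy CUDA devel image, used only to compile/install Python packages
FROM nvidia/cuda:12.1.1-devel-ubuntu22.04 AS builder

# Install miniconda (placeholder install path)
RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates && \
    rm -rf /var/lib/apt/lists/* && \
    wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh && \
    bash /tmp/miniconda.sh -b -p /opt/conda && \
    rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH

# Build the packages that need nvcc/CUDA headers into a conda env (placeholder requirements)
COPY requirements.txt /tmp/requirements.txt
RUN conda create -y -n llm python=3.10 && \
    conda run -n llm pip install -r /tmp/requirements.txt

# Stage 2: smaller CUDA runtime image; only the finished conda env is copied over,
# so the devel toolchain never ends up in the final image
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04
COPY --from=builder /opt/conda /opt/conda
ENV PATH=/opt/conda/envs/llm/bin:/opt/conda/bin:$PATH
```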
fhein@lemmy.world 1 year ago
Ah, nice.
Btw. perhaps you’d like to add:
build: .
to docker-compose.yml, so you can just run “docker-compose build” instead of having to do it with a separate docker command. I would submit a PR for it, but I’ve made a bunch of other changes to that file, so it’s probably faster if you do it.
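Something like this, for example (the service name and image tag are just placeholders for whatever the repo uses):

```yaml
services:
  koboldcpp:
    build: .                  # build from the Dockerfile in this directory
    image: koboldcpp:latest   # tag applied to the built image
```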