Comment on: What can I use for an offline, self-hosted LLM client, preferably with images, charts, Python code execution
hendrik@palaver.p3x.de 1 day ago
Maybe LocalAI? It doesn't do python code execution, but pretty much all of the rest.
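It exposes an OpenAI-compatible API, so once it's running you can test it with any standard HTTP client. A minimal sketch, assuming LocalAI is on its default port 8080 and you've already pulled a model (the model name below is just a placeholder alias - use whatever your install lists):

```python
# Minimal sketch: querying LocalAI's OpenAI-compatible chat endpoint.
# Assumptions: LocalAI listens on localhost:8080 (its default) and a
# model has been downloaded; "my-model" is a placeholder alias.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "my-model",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```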
catty@lemmy.world 1 day ago
This looks interesting - do you have experience with it? How reliable and efficient is it?
mitexleo@buddyverse.one 1 day ago
LocalAI is pretty good but resource-intensive. I ran it on a VPS in the past.
hendrik@palaver.p3x.de 1 day ago
I think many people use it and it works. But sorry - no, I don't have any real first-hand experience. I tested it briefly and it looked fine. It has a lot of features, and for text it should be about as efficient as any other ggml/llama.cpp-based inference solution. I myself use KoboldCPP for the few things I do with AI, and my computer lacks a GPU, so I don't do much image generation with software like this. It will likely take you far less than the 15 minutes it takes me to generate an image on my ill-suited machine.
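If you want to poke at KoboldCPP the same way, it also serves an HTTP API. A rough sketch against its default port 5001 (the prompt and sampling parameters here are just illustrative; adjust to taste):

```python
# Minimal sketch: hitting KoboldCPP's KoboldAI-style generate endpoint.
# Assumes KoboldCPP is running locally on its default port 5001 with a
# model already loaded.
import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "Explain what a VPS is in one sentence.",
        "max_length": 80,
        "temperature": 0.7,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```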