Comment on What can I use for an offline, self-hosted LLM client, pref. with images, charts, Python code execution

ViatorOmnium@piefed.social, 1 day ago

The main limitation is VRAM; with that constraint I doubt any model is going to be particularly fast.

I think phi3:mini on ollama might be an OK-ish fit for Python: it's a small model, but it was trained on Python codebases.
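For reference, a minimal sketch of querying a locally served phi3:mini through ollama's REST API. It assumes ollama is running on its default port (11434) and that the model was already pulled with `ollama pull phi3:mini`; the prompt is just an illustrative example.

```python
# Minimal sketch: ask a local phi3:mini (served by ollama) to write Python code.
# Assumes ollama is running on localhost:11434 and `ollama pull phi3:mini` was run.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi3:mini",
        "prompt": "Write a Python function that parses a CSV file into a list of dicts.",
        "stream": False,  # return one complete JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Actually executing whatever code the model returns is a separate step; the usual approach is to run it in a sandboxed subprocess or container rather than in the client process.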
