Comment on Your AI Girlfriend Is a Data-Harvesting Horror Show

pennomi@lemmy.world 8 months ago
Pretty easy to roll your own with Kobold.cpp and various open model weights found on HuggingFace.
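Grabbing the weights is the easy part; a minimal sketch with the huggingface_hub library (the repo id and filename below are made-up placeholders, not a recommendation — browse HuggingFace for actual GGUF conversions):

```python
# Sketch: fetch a quantised GGUF model file from HuggingFace.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="someuser/some-7b-model-GGUF",  # placeholder repo
    filename="some-7b-model.Q4_K_M.gguf",   # placeholder 4-bit quant file
)
print(model_path)  # local cache path, ready to load with Kobold.cpp or llama.cpp
```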
DarkThoughts@kbin.social 8 months ago
I tried oobabooga and it basically always crashes when I try to generate anything, no matter what model I try. But honestly, as far as I can tell all the good models require absurd amounts of VRAM, far more than consumer cards have, so you'd need at least a small GPU server farm to host them locally with any reliability. Unless of course you're fine with practically nonexistent context sizes.
exu@feditown.com 8 months ago
You’ll want to use a quantised model on your GPU. You could also run on the CPU and offload some of the layers to the GPU with llama.cpp (an option in oobabooga). Llama.cpp models are in the GGUF format.
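Roughly what that looks like with the llama-cpp-python bindings (the model path is a placeholder; n_gpu_layers controls how many layers get offloaded into VRAM, with the rest running on the CPU):

```python
# Sketch: run a quantised GGUF model with partial GPU offloading.
# Requires: pip install llama-cpp-python (built with GPU support, e.g. CUDA)
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window; larger costs more memory
    n_gpu_layers=20,  # layers offloaded to VRAM; the rest stay on the CPU
)

out = llm("Q: What is the GGUF format? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```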
TipRing@lemmy.world 8 months ago
Also, for an interface, I’d recommend KoboldLite for writing or assistant use, and SillyTavern for chat/RP.
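Both of those are just frontends that talk to whatever backend you're running over HTTP. With a local Kobold.cpp instance on its default port, the underlying call looks roughly like this (treat the exact fields as a sketch of the KoboldAI-style API that Kobold.cpp implements):

```python
# Sketch: what a chat frontend sends to a local Kobold.cpp backend.
# Assumes Kobold.cpp is already running on its default port (5001).
import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "You are a helpful assistant.\nUser: Hi!\nAssistant:",
        "max_length": 80,    # tokens to generate
        "temperature": 0.7,  # sampling randomness
    },
)
print(resp.json()["results"][0]["text"])
```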