Comment on Hello GPT-4o

Sabata11792@kbin.social 5 months ago
I can't wait till someone does this, but open source and running on non-billionaire hardware.

Dran_Arcana@lemmy.world 5 months ago
I have this running at home. oobabooga/automatic1111 for LLM/SD backends, vosk + mimic3 for STT/TTS. A little bit of custom Python to tie it all together. I certainly don't have latency as low as theirs, but it's definitely conversational when my sentences are short enough.

Sabata11792@kbin.social 5 months ago
Check out the vladmandic fork of auto1111. It seems to be much quicker with new model support.
Been wanting to try voice cloning and totally not cobble together a DIY AI waifu.
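Dran's note about short sentences is the main latency trick in a DIY pipeline like this: if the glue code splits the LLM's streaming output on sentence boundaries, the TTS engine can start speaking the first sentence while the rest is still generating. A minimal sketch of such a splitter, plain Python with no external deps (the function name and exact boundary rule are my own, not from the comment):

```python
import re

def split_sentences(buffer: str):
    """Split accumulated LLM output into complete sentences plus a
    remainder that may still be mid-generation.

    Returns (complete_sentences, leftover_text).
    """
    # Treat ., !, or ? followed by whitespace as a sentence boundary.
    parts = re.split(r"(?<=[.!?])\s+", buffer)
    if not parts:
        return [], ""
    # The final chunk may be an unfinished sentence; keep it buffered.
    return parts[:-1], parts[-1]

# Feed each complete sentence to TTS as soon as it appears:
done, pending = split_sentences("Hello there. How are you? I was just")
# done == ["Hello there.", "How are you?"], pending == "I was just"
```

In a real loop you would append each streamed token to `buffer`, call the splitter, hand `done` to the TTS engine, and carry `pending` forward.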
Holzkohlen@feddit.de 5 months ago
I can’t tell if you are for real or joking with those concatenations of letters. Have you tried the new Oongaboonga123? I hear it’s got great support for bpm°C
randon31415@lemmy.world 5 months ago
Look up oobabooga, and then play with all the fun extensions.
Sabata11792@kbin.social 5 months ago
I never had any luck with most of the extensions, let alone figuring out how to format a prompt for the API. I'm just making shit up as I go.
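The prompt-formatting confusion is usually a template mismatch: each local finetune expects a specific instruction template, and sending plain text to the API skips the formatting the web UI applies for you. As one illustration (not the only format; the Alpaca-style template below is just a common one from that era, and the helper name is mine):

```python
def build_prompt(history, user_message):
    """Assemble an Alpaca-style instruct prompt from chat history.

    history is a list of (user_text, assistant_text) pairs. Using the
    wrong template is a common reason raw API output looks worse than
    the same model in the web UI.
    """
    lines = []
    for user, assistant in history:
        lines.append(f"### Instruction:\n{user}\n\n### Response:\n{assistant}\n")
    # End with an open Response block for the model to complete.
    lines.append(f"### Instruction:\n{user_message}\n\n### Response:\n")
    return "\n".join(lines)
```

Check your model's card for its expected template (ChatML, Llama-chat, Alpaca, etc.) and mirror it exactly.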
Dyf_Tfh@lemmy.sdf.org 5 months ago
If you didn't already know, you can run some small models locally with an entry-level GPU.
For example, I can run Llama 3 8B or Mistral 7B on a 1060 3GB with Ollama. It is about as bad as GPT-3.5 Turbo, so overall mildly useful.
Although there is quite a bit of controversy over what counts as an "open source" model; most are only "open weight".
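Once Ollama is running, it serves an HTTP API on localhost:11434, so the glue code is short. A sketch using only the standard library (assumes the model was already fetched with `ollama pull`; the helper names are mine):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def make_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False requests one complete JSON object instead of
    newline-delimited streaming chunks.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send a one-shot generation request to a locally running Ollama."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=make_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g. generate("mistral:7b", "Explain quantization in one sentence.")
```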
abhibeckert@lemmy.world 5 months ago
Emphasis on “small” models. The large ones need about $80,000 in RAM.
bamboo@lemm.ee 5 months ago
Llama 2 70B can run on a specced-out current-gen MacBook Pro. Not cheap hardware in any sense, but it isn't a large data center cluster.
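The memory figures here follow from simple arithmetic: at fp16 each weight takes 2 bytes, so 70B parameters need roughly 140 GB just for weights, and a 4-bit quant cuts that to about 35 GB, which is why it fits on a maxed-out MacBook but not on consumer GPUs. A back-of-envelope estimator (my own helper; it ignores KV cache and activation overhead, so real usage is higher):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Rough weight-only memory estimate for an LLM, in GB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# 70B at fp16 vs. a 4-bit quantization:
fp16 = model_memory_gb(70, 16)  # 140.0 GB
q4 = model_memory_gb(70, 4)     # 35.0 GB
```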
Sabata11792@kbin.social 5 months ago
I've been playing with the Mistral 7B models. About the most my hardware can reasonably run... so far. Would love to add vision and voice, but I'm just happy it can run.
ProfessorProteus@lemmy.world 5 months ago
I've been wanting to run that one on my hardware, but GPT4All just refuses to start its GUI. The only thing is a "chat.exe" that sits idle in the task manager. And this is an issue I've seen reported in their GitHub by several users, on both Windows 10 and 11.
Have you found success with that frontend, or are you using one that actually works? I haven't researched any others since this issue has me a little burnt out.
Sabata11792@kbin.social 5 months ago
I use text-generation-webui and sometimes Kobold. I can only run it with 4-bit quant enabled since I'm just short on VRAM to fully load the model.
Text gen runs a server you access through the web browser instead of a desktop app.
I haven't tried GPT4All.