Oh wait, I think I misunderstood. I thought you had local language models running on your computer. I've seen that discussed before with varying results.
Last time I tried running my own model was in the early days of the Llama release, running it on an RTX 3060. The speed of delivery was much slower than OpenAI’s API, and the output was way off.
It doesn’t have to be perfect, but I’d like to make my own API calls from a remote device phoning home to my machine instead of to OpenAI’s servers. Using my own documents as a reference would be a plus too, just to keep my info private and still accessible by the LLM.
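For what it's worth, the "phoning home instead of OpenAI" part can be pretty simple if your local server exposes an OpenAI-compatible endpoint (llama.cpp's server and Ollama both do). A minimal sketch, assuming a server on port 8080; the URL, port, and model name are placeholders for whatever your setup uses:

```python
# Sketch of calling a self-hosted, OpenAI-compatible endpoint instead of
# OpenAI's servers. URL, port, and model name are assumptions -- adjust
# for your own server.
import json
import urllib.request

LOCAL_API = "http://localhost:8080/v1/chat/completions"  # hypothetical local server

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completion request aimed at the local server."""
    payload = {
        "model": "local-model",  # placeholder; many local servers ignore this field
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        LOCAL_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Summarize my notes on backups.")
    # Only works if a local server is actually running on that port.
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

Since the request/response shape matches OpenAI's chat format, existing client code mostly just needs the base URL swapped.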
Didn’t know about ElevenLabs. Checking them out soon.
stevedidwhat_infosec@infosec.pub 1 year ago
That could be fun! I’ve made and trained my own models before, but I find that getting the right amount of data (in terms of both size and diversity, to ensure features are orthogonal out of the gate) can be pretty tough.
If you don’t get that balance of size and diversity right in your data, the efficacy upper limit is gonna be way lower than you’d like, but you might have some good data sets lying around, I got no clue ^_^
Lemmy know how it goes!