The bots (the actual girlfriends or whatever other characters) aren't the problem. You can find them on chub.ai for example, or write them yourself fairly easily. The issue is the software, and even more so the hardware. You need something like the mentioned Kobold.cpp or oobabooga, and then you also need a trained LLM model, which you can get on huggingface.co and load within Kobold or oobabooga; this is already where it gets complicated. You also need to understand how they work in regard to context sizes and memory use, because they need a lot, and I mean A LOT, of VRAM to work properly. Basically, the more VRAM you have, the better their contextual understanding, i.e. their memory, is. Otherwise you'd have a bot that can only really contextualize the last couple of messages.

For paid services like novelai.net you basically have your bots run through big server farms with lots of GPUs that pool their VRAM and processing power, giving you "decent" context sizes (imo the greatest weak point of LLMs, and one deeply rooted in how they work) and decent speed. NovelAI also supports front-ends like SillyTavern, which is great for local bot management and settings regardless of whether you self-host or use a paid service (NOT EVERY PAID SERVICE HAS AN API FOR THIS! OpenAI's ChatGPT technically does too, but they do not allow NSFW content and can ban you for it if caught).
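To put some very rough numbers on the VRAM/context point above, here's a back-of-the-envelope sketch. The figures assume a generic 13B Llama-style model (40 layers, hidden size 5120, fp16 KV cache), so treat them as ballpark only; the exact numbers depend on the model architecture and quantization.

```python
# Rough back-of-the-envelope: why context size eats VRAM.
# Assumed numbers are for a generic 13B Llama-style model - adjust for
# whatever model you actually run.

n_layers   = 40      # transformer layers
hidden     = 5120    # model (embedding) dimension
bytes_fp16 = 2       # bytes per value in an fp16 KV cache
n_ctx      = 4096    # context window in tokens

# Each token stores one key and one value vector per layer.
kv_bytes_per_token = 2 * n_layers * hidden * bytes_fp16
kv_cache_gib = kv_bytes_per_token * n_ctx / 1024**3

weights_gib_fp16 = 13e9 * 2.0 / 1024**3   # unquantized fp16 weights
weights_gib_q4   = 13e9 * 0.5 / 1024**3   # ~4-bit quantized, very roughly

print(f"KV cache per token: {kv_bytes_per_token / 1024**2:.2f} MiB")
print(f"KV cache at {n_ctx} tokens: {kv_cache_gib:.1f} GiB")
print(f"Weights: ~{weights_gib_fp16:.0f} GiB fp16, ~{weights_gib_q4:.0f} GiB at 4-bit")
```

So the weights alone can eat most of a consumer card, and every extra token of context costs VRAM on top of that.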
There's a bunch of "free" online services too, like janitorai.com, but most of them are slow and the chat degrades significantly after just a few messages because of their small context sizes. The better / paid models suffer from this degradation too, but it sets in more slowly and is less noticeable, at least at first. You can still use those free services to get an idea of how LLMs work, though.
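If you want a feel for why a small context window makes the chat "forget" so quickly, here's a toy sketch of what every backend effectively does when it builds the prompt. The token counting is faked with a word count, and this isn't any specific service's actual code, just the general idea:

```python
# Toy sketch: with a small context budget, older messages simply fall out
# of the prompt, so the bot can no longer "remember" them at all.

def build_prompt(history, max_tokens):
    """Keep only the most recent messages that fit in the context budget."""
    kept, used = [], 0
    for msg in reversed(history):          # walk newest-first
        cost = len(msg.split())            # stand-in for a real token count
        if used + cost > max_tokens:
            break                          # everything older gets dropped
        kept.append(msg)
        used += cost
    return "\n".join(reversed(kept))

history = [f"Message {i}: something the user said earlier" for i in range(50)]
print(build_prompt(history, max_tokens=60))   # only the last handful survive
```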
Comment on Your AI Girlfriend Is a Data-Harvesting Horror Show
Gork@lemm.ee 8 months ago
Are there any Open Source girlfriends that we can download and compile?
DarkThoughts@kbin.social 8 months ago
Gork@lemm.ee 8 months ago
Does it make it faster if the GPU has waifu stickers on it?
DarkThoughts@kbin.social 8 months ago
I don't know, I'm not a weeb.
Turun@feddit.de 8 months ago
Define “it”
Because waifu stickers may indeed speed up “it” for some definition of “it”
HelloHotel@lemmy.world 8 months ago
if you find a magic streaming service that wasn’t there before selling girlfriend video rentals, RUN! (points if anyone got the reference)
SwampYankee@mander.xyz 8 months ago
Basically, the more VRAM you have, the better their contextual understanding, i.e. their memory, is. Otherwise you'd have a bot that can only really contextualize the last couple of messages.
Hmm, if only there was some hardware analogue for long-term memory.
OKRainbowKid@feddit.de 8 months ago
What are you trying to say? Do you understand what the problem is?
SwampYankee@mander.xyz 8 months ago
I guess I'm wondering if there's some way to bake the contextual understanding into the model instead of keeping it all in VRAM. Like if you're talking to a person and you refer to something that happened a year ago, you might have to provide a little context and it might take them a minute, but eventually they'll usually remember. Same with AI: you could say, "hey, remember when we talked about [x]?" and then it would recontextualize by bringing that conversation back into VRAM.
Seems like more or less what people do with Stable Diffusion by training custom models, or LoRAs, or embeddings. It would just be interesting if it were a more automatic process as part of interacting with the AI, with the model always being updated with information about your preferences instead of having to be told explicitly.
But mostly it was just a joke.
DarkThoughts@kbin.social 8 months ago
Yes, databases (saved on a hard drive). SillyTavern has Smart Context, but that doesn't seem that easy to set up, so I have no idea how well it actually works in practice yet.
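For anyone curious, the database idea in its simplest possible form looks roughly like the sketch below. To be clear, this is just the concept, not how SillyTavern's Smart Context is actually implemented, and real setups use embeddings / vector search instead of dumb keyword matching:

```python
# Rough sketch of "chat memory from a database": store every exchange on disk,
# then pull the most relevant old messages back into the prompt when the user
# mentions something from the past.
import sqlite3

db = sqlite3.connect("chat_history.db")
db.execute("CREATE TABLE IF NOT EXISTS log (id INTEGER PRIMARY KEY, msg TEXT)")

def remember(msg):
    db.execute("INSERT INTO log (msg) VALUES (?)", (msg,))
    db.commit()

def recall(query, limit=3):
    """Naive keyword overlap; real systems score with embeddings instead."""
    words = set(query.lower().split())
    rows = db.execute("SELECT msg FROM log").fetchall()
    scored = sorted(rows,
                    key=lambda r: len(words & set(r[0].lower().split())),
                    reverse=True)
    return [r[0] for r in scored[:limit]]

remember("We talked about hiking the Dolomites next summer.")
remember("The user's cat is named Miso.")
print(recall("hey, remember when we talked about hiking?"))
```

Whatever gets recalled still has to be stuffed back into the prompt, so it eats context either way; it just means old stuff isn't gone forever.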
pennomi@lemmy.world 8 months ago
Pretty easy to roll your own with Kobold.cpp and various open model weights found on HuggingFace.
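Grabbing weights really is about this simple; a minimal sketch using the huggingface_hub library (the repo and file names below are only examples, swap in whatever model you like):

```python
# Download a quantized GGUF from HuggingFace, then point your backend
# (Kobold.cpp, oobabooga, llama.cpp, ...) at the downloaded file.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/MythoMax-L2-13B-GGUF",    # example repo
    filename="mythomax-l2-13b.Q4_K_M.gguf",     # example 4-bit quant
)
print("Model saved to:", model_path)
# Then load model_path from your backend's launcher or settings.
```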
TipRing@lemmy.world 8 months ago
Also, for an interface I'd recommend KoboldLite for writing or assistant use and SillyTavern for chat/RP.
DarkThoughts@kbin.social 8 months ago
I tried oobabooga and it basically always crashes when I try to generate anything, no matter what model I try. But honestly, as far as I can tell all the good models require absurd amounts of VRAM, much more than consumer cards have, so you'd need at least a small GPU server farm to reliably host them locally yourself. Unless of course you want practically nonexistent context sizes.
exu@feditown.com 8 months ago
You’ll want to use a quantised model on your GPU. You could also use the CPU and offload some parts to the GPU with llama.cpp (an option in oobabooga). Llama.cpp models are in the GGUF format.
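Roughly what that looks like with the llama.cpp Python bindings (llama-cpp-python), which expose the same knobs you'll find in oobabooga's llama.cpp loader; the model path and layer count below are placeholders for your own GGUF file and card:

```python
# CPU + GPU split with llama-cpp-python: a quantized GGUF on disk, a chosen
# context size, and some layers offloaded to VRAM while the rest stay in RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./mythomax-l2-13b.Q4_K_M.gguf",  # quantized GGUF (placeholder)
    n_ctx=4096,          # context window in tokens
    n_gpu_layers=20,     # layers offloaded to the GPU; 0 = pure CPU
)

out = llm("### Instruction:\nSay hi in one sentence.\n### Response:\n",
          max_tokens=64)
print(out["choices"][0]["text"])
```

The more layers you can fit in VRAM, the faster generation gets; spilling into system RAM works, it's just slow.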
e-ratic@kbin.social 8 months ago
Ask Krieger https://files.catbox.moe/e0i784.jpg
itsAsin@lemmy.world 8 months ago
i second this request. please
DarkThoughts@kbin.social 8 months ago
See my other reply for some basic info & pointers.
herrcaptain@lemmy.ca 8 months ago
Hey now, I don’t want anyone looking at my girlfriend’s source code. That’s personal!
demonsword@lemmy.world 8 months ago
it’s okay, dude, we all already did…