fishynoob@infosec.pub 5 days ago
I had never heard of KoboldAI. I was going to self-host Ollama and try that, but I’ll take a look at Kobold. I had never heard about controls on world-building and dialogue triggers either; there’s a lot to learn.
Will more VRAM solve the problem of not retaining context? Can I throw 48GB of VRAM towards an 8B model to help it remember stuff?
Yes, I’m looking at image generation (Stable Diffusion) too. Thanks
tal@lemmy.today 5 days ago
IIRC (I ran KoboldAI with 24GB of VRAM, so I wasn’t especially constrained) there are limits on the number of tokens that can be sent as a prompt imposed by VRAM, which I did not hit. However, there are also limits imposed by the software: you can only raise the number of tokens that get fed in so far, regardless of VRAM. More VRAM does let you use larger, more “knowledgeable” models.
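For example, in Ollama (which you mentioned) the context window is just a per-request option. A minimal sketch, assuming the official Python client, a running Ollama server, and a model tag you’ve already pulled (the tag below is only a placeholder):

```python
# Minimal sketch: raising the context window per request with the
# official Ollama Python client (`pip install ollama`). Assumes the
# Ollama server is running and the model tag below has been pulled.
import ollama

response = ollama.chat(
    model="llama3.1:8b",  # placeholder tag; substitute whatever you pulled
    messages=[{"role": "user", "content": "Summarize our story so far."}],
    options={"num_ctx": 8192},  # context length in tokens; the default is fairly small
)
print(response["message"]["content"])
```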
I’m not sure whether those software limits are purely arbitrary, there to keep performance acceptable, or whether very large prompts run into other technical issues.
It definitely isn’t capable of keeping the entire previous conversation (once it grows to any real length) as input when generating a new response, though.
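To give a feel for why context costs VRAM in the first place: every token in the window gets keys and values cached at every layer. Here’s a back-of-envelope sketch; the architecture numbers are assumptions for a Llama-3-8B-style model, and the real footprint depends on the runtime and any cache quantization:

```python
# Rough KV-cache size estimate. The architecture numbers below are
# assumptions for a Llama-3-8B-shaped model, not values measured
# from any particular runtime.
n_layers = 32      # transformer blocks
n_kv_heads = 8     # grouped-query-attention KV heads
head_dim = 128     # per-head dimension
bytes_per_val = 2  # fp16 cache entries

def kv_cache_gib(context_tokens: int) -> float:
    # 2x for keys and values; one entry per layer, per KV head, per token
    total = 2 * n_layers * n_kv_heads * head_dim * bytes_per_val * context_tokens
    return total / 2**30

for ctx in (4096, 8192, 32768, 131072):
    print(f"{ctx:>7} tokens -> {kv_cache_gib(ctx):.1f} GiB of KV cache")
```

On those assumptions even a 128K-token cache is about 16GiB, so 48GB of VRAM genuinely buys you longer context; it just can’t push past whatever window the software and the model’s training allow.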
fishynoob@infosec.pub 5 days ago
I see. Thanks for the note. I think diminishing returns set in very quickly beyond 48GB of VRAM, so I’ll likely stick to that limit. I wouldn’t want to use models hosted in the cloud, so that’s out of the question.