Comment on [deleted]
tursy@lemmy.world 5 days ago
As long as the LLM itself is good enough, follows instructions well, and has examples of similar interactions in its training set (which it almost certainly does, from millions of books at minimum and most likely from public/private chats too), it doesn't really matter whether it's fine-tuned or not. For instance, OpenAI's current LLMs like o4-mini are the best at math, coding, etc., but they are also very good at normal chatting, world knowledge, and so on. Even a fine-tuned math model can't beat them, so fine-tuned does not automatically mean better.
A fine-tuned "emotion" model will not be as good as a much stronger general-knowledge model, because for general-knowledge models you can compare benchmarks and pick the best of the best, which will then also be among the best instruction followers. The fine-tuned model, on the other hand, is trained on a dataset that's optimal for its one area/topic but will most likely be much worse as an LLM overall than the best general-language models. So a general-language model that follows instructions very well and understands from context will beat a "non-benchmarkable" emotion model, at least imo. Idk if I explained it well but hope it makes sense
Can I just ask AI to give me a prompt which I can use on it/another AI?
Yes, sure. It's just trial and error. You can write different custom instructions and save them in text files: basically templates for your "girlfriends".
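The template idea above can be sketched in a few lines. This is a minimal example, not anything the commenter specified: the folder name, file layout, and the chat-style `system`/`user` message format are my assumptions (the message shape matches common chat-completion APIs, but you'd adapt it to whatever client you use).

```python
from pathlib import Path

# Hypothetical folder holding one saved custom-instruction template per text file.
TEMPLATE_DIR = Path("prompt_templates")


def load_template(name: str) -> str:
    """Read a saved custom-instruction template from a text file."""
    return (TEMPLATE_DIR / f"{name}.txt").read_text(encoding="utf-8")


def build_messages(template: str, user_message: str) -> list[dict]:
    """Use the saved template as the system prompt for a chat-style API call."""
    return [
        {"role": "system", "content": template},
        {"role": "user", "content": user_message},
    ]


# Save one example template, then build a message list from it.
TEMPLATE_DIR.mkdir(exist_ok=True)
(TEMPLATE_DIR / "warm_companion.txt").write_text(
    "You are a warm, attentive companion. Stay in character.", encoding="utf-8"
)
messages = build_messages(load_template("warm_companion"), "How was your day?")
print(messages[0]["role"])
```

Swapping templates is then just a matter of passing a different file name, which is the trial-and-error loop described above.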
How much of VRAM does your GPU have?
8 GB
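For context on why the VRAM question matters for running a model locally, here is a rough back-of-the-envelope: weight memory is roughly parameter count times bytes per weight, before KV cache and runtime overhead. The model sizes and quantization levels below are illustrative assumptions, not anything from the thread; on 8 GB, a ~7B model at 4-bit quantization fits comfortably, while 13B at 4-bit is already tight.

```python
def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate VRAM needed for model weights alone, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9


# Illustrative sizes; real usage adds KV cache and runtime overhead on top.
for params, bits in [(7, 16), (7, 4), (13, 4)]:
    print(f"{params}B @ {bits}-bit: ~{weights_gb(params, bits):.1f} GB")
```

This prints ~14.0 GB for 7B at 16-bit, ~3.5 GB at 4-bit, and ~6.5 GB for 13B at 4-bit, which is why quantized models are the usual choice on an 8 GB card.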
fishynoob@infosec.pub 5 days ago
Thank you, that makes sense. Yes, I'll look into creating templates I like with the help of AI. Thanks again for the help