Comment on “Hi, Jeffrey!”
brucethemoose@lemmy.world 5 hours ago
Meme finetunes are nothing new.
As an example, there are DPO datasets with positive/negative examples intended to train LLMs to respond politely and helpfully (as opposed to the negative response).
And the community’s immediate thought was “…What if I *reversed* them?”
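For the unfamiliar: a DPO dataset is basically a pile of (prompt, chosen, rejected) triples, so “reversing” it really is just a field swap. A rough sketch of the idea (the field names follow the common preference-dataset convention; the example text is made up):

```python
# Minimal sketch: a DPO preference pair in the usual chosen/rejected
# format, and the one-line "reversal" that flips the training signal.
# Field names follow the convention many HF preference datasets use;
# the example text is invented.

example = {
    "prompt": "My code won't compile. Can you help?",
    "chosen": "Of course! Can you share the error message so we can narrow it down?",
    "rejected": "Figure it out yourself.",
}

def reverse_pair(pair: dict) -> dict:
    """Swap chosen/rejected so DPO now prefers the 'negative' response."""
    return {
        "prompt": pair["prompt"],
        "chosen": pair["rejected"],
        "rejected": pair["chosen"],
    }

reversed_example = reverse_pair(example)
print(reversed_example["chosen"])  # -> "Figure it out yourself."
```

Run DPO on the swapped pairs and the model is now being optimized toward exactly the responses the dataset authors wanted to train out.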
khepri@lemmy.world 5 hours ago
haha just imagining people showing off their collections, “here’s my Mr. Rogers chatbot, and Thomas Jefferson, and even Luffy from One Piece! And uh…oh yeah over here we have EpsteinGPT for when I, I mean for if, um…it’s for lulz ok?! Where are you going?!”
brucethemoose@lemmy.world 4 hours ago
It’s literally “this one is my fursona. This one won’t refuse BDSM, but it’s not as eloquent. Oh, this one is lobotomized but really creative.” Here is an example, and note that it is one of 115 models from one account:
huggingface.co/Mawdistical/RAWMAW-70B?not-for-all…
I am not exaggerating. And I love it. Furries have made some good code contributions to the space, like better sampling algorithms, to say nothing of the horny roleplayers.
Early on, there were a few ‘character’ finetunes, or more generic ones like ‘talk like a pirate’ or ‘talk only in emojis’. But as local models got more advanced (and the uncensoring got really good), they got so good at adopting personas anyway that the finetuning focused more on writing ‘style’ and storytelling than on emulating specific characters. For example, here is one trained specifically to stick to the role of a dungeon master: huggingface.co/LatitudeGames/Nova-70B-Llama-3.3