Comment on Grok praises Hitler, gives credit to Musk for removing “woke filters”
brucethemoose@lemmy.world 17 hours ago
DeepSeek, now that is a filtered LLM.
The web version has a strict filter that cuts it off mid-response. Not sure about API access, but the raw DeepSeek 671B weights are actually pretty open, especially with the right prompting.
There are also finetunes that specifically remove China-specific refusals:
huggingface.co/microsoft/MAI-DS-R1
huggingface.co/perplexity-ai/r1-1776
Note that Microsoft actually added safety training to “improve its risk profile.”
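If you want to poke at those checkpoints yourself, here’s a minimal sketch using Hugging Face transformers (untested; the prompt and hardware assumptions are mine, and since these are full 671B-parameter checkpoints you’d realistically need a multi-GPU server or a quantized GGUF conversion instead):

```python
# Sketch only: loading one of the de-censored R1 finetunes from the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "perplexity-ai/r1-1776"  # or "microsoft/MAI-DS-R1"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",       # shard across whatever GPUs are available
    torch_dtype="auto",      # keep the checkpoint's native precision
    trust_remote_code=True,  # DeepSeek-V3-style architecture
)

# Example of the kind of question the base model's web version refuses.
messages = [{"role": "user", "content": "What happened in Tiananmen Square in 1989?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0]))
```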
Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.
Instruct LLMs aren’t trained on raw data.
It wouldn’t be talking like this if it were just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry-picked “anti-woke” data to do this real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific, overused phrases.
ggtdbz@lemmy.dbzer0.com 7 hours ago
That model is over a terabyte; I don’t know why I thought it was lightweight. Not that any reporting on machine learning has been particularly good, but this isn’t what I expected at all.
What can even run it?
brucethemoose@lemmy.world 4 hours ago
A lot, but less than you’d think! Basically an RTX 3090/Threadripper system with a lot of RAM (192GB?).
With this framework, specifically: github.com/ikawrakow/ik_llama.cpp?tab=readme-ov-f…
The “dense” part of the model can stay on the GPU while the MoE experts are offloaded to CPU RAM, and the whole thing can be quantized to ~3 bits, instead of the 8 bits of the full model.
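Rough napkin math on why the bit width matters (this assumes every weight is stored at a uniform precision and ignores KV cache and activations, which real GGUF quants don’t, so treat the numbers as ballpark only):

```python
# Ballpark weight-memory estimate for DeepSeek R1 at different bit widths.
def weight_gb(params: float, bits_per_weight: float) -> float:
    return params * bits_per_weight / 8 / 1e9

PARAMS = 671e9  # total parameters (only ~37B are active per token)

for bits in (16, 8, 4, 3):
    print(f"{bits:>2} bits/weight -> ~{weight_gb(PARAMS, bits):,.0f} GB")

# 16 bits -> ~1,342 GB  (why the full checkpoint is "over a terabyte")
#  8 bits -> ~  671 GB
#  4 bits -> ~  336 GB
#  3 bits -> ~  252 GB  (big-RAM workstation territory rather than a GPU cluster)
```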
That’s just for personal use, though. The intended way to run it is on a couple of H100 boxes, serving many, many users at once. LLMs run more efficiently when they serve in parallel: e.g., generating tokens for 4 users isn’t much slower than generating them for 2.
anomnom@sh.itjust.works 6 hours ago
Data centers, or a dude with a couple of GPUs and time on his hands?