Comment on [JS Required] MiniMax M1 model claims Chinese LLM crown from DeepSeek - plus it's true open-source
camilobotero@feddit.dk 1 day ago
Well… 🤔
LWD@lemm.ee 1 day ago
DeepSeek imposes similar restrictions, but only on their website. You can self-host and then enjoy relatively truthful (as truthful as a bullshit generator can be) answers about Tiananmen Square, Palestine, and South Africa (topics American-made bullshit generators apparently like making things up about, to appease their corporate overlords or conspiracy theorists respectively).
Trimatrix@lemmy.world 1 day ago
Nope, self-hosted DeepSeek 8B thinking and distilled variants still clam up about Tiananmen Square
LWD@lemm.ee 1 day ago
If you’re talking about the distillations, AFAIK they take somebody else’s model and run it through their (actually open-source) distiller. I tried a couple of those models because I was curious. The distilled Qwen model is cagey about Tiananmen Square, but Qwen was made by Alibaba. The distillation of a US-made model did not have this problem.
I don’t have enough RAM to run the full DeepSeek R1, but AFAIK it doesn’t have this problem. Maybe it does.
In case it isn’t clear, BTW, I do despise LLMs and AI in general. The biggest issue with them isn’t the glaring lies (not Tiananmen Square, and certainly not the “it’s woke!” complaints about generating images of black founding fathers), but the subtle and insidious little details like agreeableness: trying to get people to spend a little more time with them, which apparently turns once-reasonable people into members of micro-cults.
xcjs@programming.dev 1 day ago
That’s not how distillation works if I understand what you’re trying to explain.
If you distill model A into a smaller model, you just get a smaller version of model A with approximately the same output distribution, but fewer parameters. You can’t distill Llama into DeepSeek R1.
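To illustrate why the student inherits the teacher’s behavior: standard logit distillation trains the smaller model to match the larger model’s output distribution, so whatever the teacher does (including refusing certain topics) gets baked into the student. Here’s a minimal sketch of the usual soft-target KL loss — the function and array names are illustrative, not from any particular codebase:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) at temperature T, averaged over the batch.

    Minimizing this pushes the student's output distribution toward the
    teacher's, so the student reproduces the teacher's behavior rather
    than acquiring a different model's. The T*T factor is the usual
    gradient-scale correction for temperature softening.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```

The loss is zero only when the two distributions already agree, which is the point: a student distilled from R1 converges toward R1’s outputs, not toward some other base model’s.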
I’ve been able to run distillations of DeepSeek R1 up to 70B, and they’re all still censored. There is a version of DeepSeek R1 “patched” with Western values called R1-1776 that will answer topics censored by the Chinese government, however.