“China bad”
*sounds legit
“Sounds legit” is what one hears about the FUD spread by anglophone media every time the US oligarchy is caught with its pants down.
Snowden: “US is illegally spying on everyone”
Media: Snowden is a Russian spy
*Sounds legit
France: US should not unilaterally invade a country
Media: Iraq is full of WMDs
*Sounds legit
DeepSeek: Guys, distillation and mixture of experts are ways to save money and energy; here’s a paper on how to do the same.
Media: China bad, deepseek must be cheating
*Sounds legit
pennomi@lemmy.world 1 week ago
The open paper they published details the algorithms and techniques used to train it, and it’s been replicated by researchers already.
legolas@fedit.pl 1 week ago
So are these techniques really so novel and groundbreaking? Will we now have a burst of DeepSeek-like models everywhere? Because that’s what absolutely should happen if the whole story is true. I would assume there are dozens or even hundreds of companies in the USA that possess a similar number of chips (surely more) than the Chinese folks claimed to have trained their model on.
ArchRecord@lemm.ee 1 week ago
The general concept, no. (It’s reinforcement learning, something that’s existed for ages.)
The actual implementation, yes. (Training a model to think inside a separate XML-style section, and reinforcing with the highest-quality results from previous iterations, using reinforcement learning that naturally pushes responses toward the highest-rewarded outputs.) Most other companies just didn’t assume this would work as well as throwing more data at the problem.
This is actually how people believe some of OpenAI’s newest models were developed, but the difference is that OpenAI was under the impression that more data would be necessary for the improvements, and thus had to continue training the entire model with **additional new information**, whereas DeepSeek decided to simply scrap that part altogether and go solely for reinforcement learning.
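The core loop described above — have the model reason inside a dedicated think section, then reinforce the best-structured outputs — can be sketched in a few lines. This is a toy illustration under stated assumptions, not DeepSeek’s actual pipeline: the `<think>`/`<answer>` tag names, the reward function, and the best-of-N selection are simplified stand-ins for R1’s rule-based accuracy/format rewards and its GRPO policy updates.

```python
import re

def extract_think(response: str) -> str:
    """Pull out the contents of the <think>...</think> section, if any."""
    m = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    return m.group(1).strip() if m else ""

def reward(response: str) -> float:
    """Toy rule-based reward: favor responses that actually use the think
    section and wrap a final answer in tags. (Stand-in for R1's real
    correctness + format rewards.)"""
    score = 0.0
    if extract_think(response):
        score += 1.0
    if "<answer>" in response and "</answer>" in response:
        score += 1.0
    return score

def best_of_n(candidates: list[str]) -> str:
    """Keep the highest-rewarded candidate; iterated over many rounds,
    this is the signal that pushes outputs toward structured reasoning."""
    return max(candidates, key=reward)

candidates = [
    "42",                                             # no reasoning shown
    "<think>6 * 7 = 42</think><answer>42</answer>",   # structured
    "<think></think><answer>42</answer>",             # empty think section
]
print(best_of_n(candidates))  # the structured candidate wins
```

The point of the comment stands out in the sketch: no new data is involved — only the model’s own outputs, filtered by a cheap reward signal.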
Probably, yes. Companies and researchers are already beginning to use this same methodology. Here’s a writeup about S1, a model that performs up to 27% better than OpenAI’s best model. S1 used supervised fine-tuning, and did something so basic that people hadn’t previously thought to try it: just making the model think longer by modifying the terminating XML tags.
This was released days after R1, based on R1’s initial premise, and creates better-quality responses. Oh, and of course, it cost $6 to train.
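The S1 trick — keep the model thinking by intercepting its closing tag — can be mocked up like this. The function names, the literal `</think>` string, and the `" Wait,"` continuation are illustrative assumptions; the real s1 work operates on the tokenizer’s end-of-thinking delimiter during decoding, not on XML text after the fact.

```python
def budget_force(generate, prompt: str, min_extensions: int = 2) -> str:
    """Each time the model tries to close its reasoning early, strip the
    closing tag and append ' Wait,' so it keeps thinking; after
    min_extensions forced continuations, let it finish normally."""
    text = prompt + "<think>"
    for _ in range(min_extensions):
        chunk = generate(text)
        if chunk.endswith("</think>"):
            text += chunk[: -len("</think>")] + " Wait,"
        else:
            # Model kept going on its own; accept the output as-is.
            text += chunk
            return text
    return text + generate(text)

# Toy stand-in for a model that always tries to stop after one step.
def toy_model(context: str) -> str:
    return " step.</think>"

print(budget_force(toy_model, "Q: 2+2? "))
```

Running it shows the reasoning span being stretched: the model’s two early attempts to close are rewritten into “ Wait,” before the third close is allowed through — which is the whole “so basic nobody tried it” insight.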
So yes, I think it’s highly probable that we see a burst of new models, or at least improvements to existing ones. (Nobody has a very good reason to make a whole new model of a different name/type when they can simply improve the one they’re already using and have implemented)