cross-posted from: lemmy.sdf.org/post/29335261

cross-posted from: lemmy.sdf.org/post/29335160

Here is the original report.

The research firm SemiAnalysis has conducted an extensive analysis of what's actually behind DeepSeek's training costs, refuting the narrative that R1 has become so efficient that compute resources from NVIDIA and others are unnecessary. Before diving into the actual hardware DeepSeek used, let's look at how the industry initially perceived it. It was claimed that DeepSeek spent only "$5 million" on its R1 model, a model said to be on par with OpenAI's o1, and this triggered a retail panic that was reflected in the US stock market. Now that the dust has settled, let's take a look at the actual figures.