Comment on Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece

in_my_honest_opinion@piefed.social 1 day ago

Fundamentally, no: linear progress requires exponential resources. The article below is about AGI, but the same point applies here: transformer-based models will not benefit from just more grunt. We're at the software stage of the problem now. But that doesn't sign fat checks, so the big companies are incentivized to print money by developing more hardware.

https://timdettmers.com/2025/12/10/why-agi-will-not-happen/
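To make that first claim concrete: under the commonly cited power-law scaling picture (loss falling as a power of compute), each equal step of improvement costs a growing multiple of compute. A minimal sketch, with made-up constants purely for illustration:

```python
# Toy illustration only: assume loss follows a power law in compute,
# L(C) = a * C**(-b). The constants below are made up, not fitted values.
def compute_for_loss(loss, a=400.0, b=0.3):
    # Invert L = a * C**(-b)  =>  C = (a / L) ** (1 / b)
    return (a / loss) ** (1 / b)

prev = None
for loss in [3.0, 2.5, 2.0, 1.5]:
    c = compute_for_loss(loss)
    growth = f" ({c / prev:.1f}x the previous step)" if prev else ""
    print(f"loss {loss:.1f} -> relative compute {c:.3e}{growth}")
    prev = c
```

Each fixed 0.5 drop in loss costs a larger and larger multiple of compute, which is the "linear progress, exponential resources" pattern in miniature.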

Also, the industry is running out of training data:

https://arxiv.org/html/2602.21462v1

What we need are more efficient models and better harnesses. Or a different approach entirely: reinforcement learning applied to recurrent architectures that incorporate transformers has been showing promise (a toy sketch of that combination is below).
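For a rough sense of what a "recurrent architecture that incorporates transformers" can mean (this is my own toy illustration, not any specific published model, and it leaves out the reinforcement-learning part entirely): a recurrent loop whose per-step update reads from a rolling memory with attention.

```python
import numpy as np

# Toy sketch: a recurrent step whose state update is a small attention read
# over a rolling memory, i.e. "an RNN that uses a transformer block" loosely.
def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def recurrent_transformer_step(state, x, Wq, Wk, Wv):
    # state: (mem_len, d) rolling memory; x: (d,) current input embedding
    q = x @ Wq                      # query comes from the new input
    k, v = state @ Wk, state @ Wv   # keys/values come from the recurrent state
    read = attention(q[None, :], k, v)[0]
    new_state = np.vstack([state[1:], x + read])  # fold the result back into memory
    return new_state, read

rng = np.random.default_rng(0)
d, mem = 16, 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
state = np.zeros((mem, d))
for _ in range(5):                  # unroll a few steps on random inputs
    state, out = recurrent_transformer_step(state, rng.normal(size=d), Wq, Wk, Wv)
print(out.shape)                    # (16,)
```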
