Comment on America Isn’t Ready for What AI Will Do to Jobs
LodeMike@lemmy.today 16 hours ago
Those gains won’t continue into the future. Transformers are a mostly fleshed-out technology, at least on the strictly tech/math side. New use cases or specialized sandboxes are still new tech (a keyboard counts as a sandbox).
ruuster13@lemmy.zip 16 hours ago
Moore’s Law isn’t quite dead. And quantum computing is a generation away. Computers will continue getting exponentially faster.
bunchberry@lemmy.world 44 minutes ago
Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they found a clever way to use the space more effectively, they claimed it was as if they had packed more transistors into the same area, and so they called it a smaller-nanometer process node, even though they quite literally did not shrink the transistors or increase the number of transistors on the node.
This actually started happening around 2015. These clever tricks were always exaggerated, because there is no objective metric to say that a particular trick on a 20nm node really gets you performance equivalent to a 14nm node, which left huge leeway for exaggeration. In reality, actual performance gains have slowed drastically since then, and the cracks have really started to show when you look at Nvidia’s 5000-series GPUs.
The 5090 is only super powerful because the die is larger, so it fits more transistors in total, not because they actually fit more per unit of area. If you account for the die size (a normalization like the sketch at the end of this comment), it’s actually even less efficient than the 4090 and significantly less efficient than the 3090. To make the new series look like an upgrade, Nvidia has been releasing AI frame-generation software for its GPUs and artificially locking it to the newer series. The program Lossless Scaling proves that you can in principle run AI frame generation on any GPU, even ones from over a decade ago, and that Nvidia locking it to specific GPUs is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die itself.
Chip improvements have drastically slowed down for over a decade now, and the industry just keeps trying to paper it over.
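For anyone who wants to run the per-die-size comparison themselves, here is a minimal sketch of that normalization. The die areas and scores below are illustrative placeholders, not measured figures; the point is only the method (divide raw performance by die area), so plug in real spec-sheet and benchmark numbers before drawing conclusions.

```python
# Minimal sketch: normalizing raw GPU performance by die area.
# All numbers below are illustrative placeholders, NOT measured values;
# substitute real die areas and benchmark scores before comparing cards.

gpus = {
    # name: (die_area_mm2, benchmark_score), both hypothetical
    "RTX 3090": (630.0, 100.0),
    "RTX 4090": (610.0, 170.0),
    "RTX 5090": (750.0, 200.0),
}

for name, (area_mm2, score) in gpus.items():
    per_mm2 = score / area_mm2
    print(f"{name}: {score:.0f} points / {area_mm2:.0f} mm^2 = {per_mm2:.3f} points per mm^2")
```

Whether the per-area trend actually reverses between generations depends entirely on which benchmark and which real figures you use.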
LodeMike@lemmy.today 16 hours ago
No.
We know how they work. They’re purely statistical models. They don’t create, they recreate training data based on how well it was stored in the model.
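The “purely statistical” part is easy to see in miniature: the model produces a probability distribution over its vocabulary and the decoder samples from it. A minimal sketch with a made-up three-word vocabulary and hand-written logits (nothing here comes from a real model):

```python
import math
import random

# Toy next-token step. The "model" here is just a hand-written table of
# logits for a three-word vocabulary; a real transformer computes logits
# from the context, but the generation step is the same kind of draw.
vocab = ["cat", "dog", "fish"]
logits = [2.0, 1.0, 0.1]  # made-up numbers, not from any actual model

# Softmax: turn logits into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Sample the next token according to those probabilities.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)
```

A real transformer computes those logits from the whole context with billions of parameters, but each generated token is still a draw from a learned distribution.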
squaresinger@lemmy.world 14 hours ago
The problem is that hardware requirements scale exponentially with AI performance. Just look at how RAM and compute consumption have grown compared to the performance gains of the models.
Anthropic recently announced that, since the performance of a single agent isn’t good enough, it will just run teams of agents in parallel on single queries, multiplying the hardware consumption accordingly (see the back-of-the-envelope sketch below).
Exponential growth can only continue for so long.
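As a rough illustration of that multiplication: the token counts and price below are hypothetical placeholders, not Anthropic’s actual figures; the point is just that per-query compute scales linearly with the number of parallel agents, on top of whatever the underlying models already cost.

```python
# Back-of-the-envelope: N agents run in parallel on one query multiply the
# compute roughly linearly. Token counts and price are hypothetical
# placeholders, not Anthropic's actual figures.

tokens_per_agent = 50_000     # hypothetical tokens consumed per agent run
usd_per_1k_tokens = 0.01      # hypothetical price per 1,000 tokens

for n_agents in (1, 4, 16):
    total_tokens = n_agents * tokens_per_agent
    cost = total_tokens / 1_000 * usd_per_1k_tokens
    print(f"{n_agents:>2} agents: {total_tokens:>9,} tokens, about ${cost:.2f} per query")
```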