Buffalox@lemmy.world 8 months ago
Sorry, I have doubts, because that would require a 4x increase every year for 10 years! 4^10 = 1,048,576x
Considering they have historically struggled to even double performance per year, that does not seem likely.
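To make the compounding concrete, here's a quick back-of-the-envelope check (plain Python, using only the factors from the claim above):

```python
# Compounding a 4x yearly speedup over 10 years
factor_per_year = 4
years = 10
total = factor_per_year ** years
print(total)  # 1048576 -> roughly a million-fold

# Conversely, the annual factor needed to hit a given 10-year target
target = 1_000_000
annual = target ** (1 / years)
print(round(annual, 2))  # ~3.98, i.e. almost exactly 4x every single year
```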
rhebucks-zh@incremental.social 8 months ago
Buffalox@lemmy.world 8 months ago
Yes, but we usually keep those two kinds of optimization separate, only combining chip design and production process, because if the software is optimized, the hardware isn’t really doing the same task anymore.
So yes, AI speed may increase by more than the hardware alone allows, but for the most sophisticated systems the tasks will also be more complex, which may in turn slow the software down.
So I think they will never be able to achieve it, even counting software optimizations. Just the latest Tesla cars boast about 4 times higher camera resolution, which will require roughly 4 times the processing power.
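The arithmetic there assumes processing scales with pixel count; a toy illustration (the frame sizes are hypothetical, just to show the scaling):

```python
# Processing load assumed to scale linearly with pixel count
base_w, base_h = 1280, 960              # hypothetical old camera resolution
new_w, new_h = 2 * base_w, 2 * base_h   # doubling each dimension = 4x the pixels

base_pixels = base_w * base_h
new_pixels = new_w * new_h
print(new_pixels / base_pixels)  # 4.0 -> four times the work per frame
```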
We are not where we want to be, and the systems of the future will clearly be more complex, so on the software side they are more likely to be slower than faster.
rhebucks-zh@incremental.social 8 months ago
Even software that does the same thing gets slower. Examples: Microsoft Office, Amazon, the web in general, etc.
Buffalox@lemmy.world 8 months ago
That is so true, increased complexity tends to slow things down.
ryannathans@aussie.zone 8 months ago
Twice for AI or computing in general?
Buffalox@lemmy.world 8 months ago
Why does that make a difference? Compute for AI is built on the same progress as general compute, first for GPUs, then for data centers. They are similar in nature.
ryannathans@aussie.zone 8 months ago
Building an ASIC for purpose-built computation is significantly faster than using generic GPU compute cores. Like when ASICs were built for bitcoin mining/SHA-256 and a little 5-watt USB device could outperform the best GPUs.
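For reference, the computation those mining ASICs were hard-wired for is just SHA-256 applied twice to an 80-byte block header. A minimal sketch in Python (the header bytes here are a dummy placeholder, not a real block):

```python
import hashlib

# Bitcoin mining ASICs are hard-wired for exactly this operation,
# executed billions of times per second with different nonces.
def double_sha256(header: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

dummy_header = bytes(80)  # placeholder; real headers encode version, prev hash, nonce, ...
print(double_sha256(dummy_header).hex())
```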
frezik@midwest.social 8 months ago
It may be even more specialized than that. It might be a return to analog computers.
Which isn’t going to work for Nvidia’s traditional products, either.
Buffalox@lemmy.world 8 months ago
The H200 evolved from Nvidia’s GPU designs, and it will be by far the most powerful AI component in existence when it arrives later this year. AI is now so complex that it doesn’t really make sense to call it an ASIC, and the cost is $40,000, so no, not small 5-watt units.
fidodo@lemmy.world 8 months ago
There are also software improvements to consider; there’s a lot of room for efficiency gains.
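One concrete example of that kind of software-side gain is lower-precision arithmetic. A minimal sketch of naive symmetric int8 weight quantization (numpy, toy values, not any specific framework’s method):

```python
import numpy as np

# Naive symmetric int8 quantization: store weights in 1 byte instead of 4,
# cutting memory traffic (often the real bottleneck) roughly 4x.
weights = np.random.randn(4, 4).astype(np.float32)  # toy weight matrix

scale = np.abs(weights).max() / 127.0      # map the largest weight to 127
q = np.round(weights / scale).astype(np.int8)

dequantized = q.astype(np.float32) * scale
print(np.max(np.abs(weights - dequantized)))  # small rounding error per weight
```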