Comment on AI Computing on Pace to Consume More Energy Than India, Arm Says
AlotOfReading@lemmy.world 6 months ago
ML is not an ENIAC situation. Computers got more efficient not by doing fewer operations, but by making the operations they were already doing much more efficient.
The basic operations underlying ML (e.g. matrix multiplication) are already some of the most heavily optimized things around. ML is inefficient because it needs to do a lot of that. The problem is very different.
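To illustrate the point about operation count: an (m × k) times (k × n) matrix multiply requires roughly 2·m·n·k floating-point operations no matter how it is implemented. Optimized kernels (cache blocking, SIMD, parallelism) lower the cost per operation, not the number of operations. A rough sketch in Python with NumPy (the dimensions are illustrative, not taken from any specific model):

```python
import numpy as np

def matmul_flops(m: int, n: int, k: int) -> int:
    """Approximate FLOP count for an (m x k) @ (k x n) multiply:
    each of the m*n outputs needs k multiplies and k-1 adds."""
    return m * n * (2 * k - 1)

# Layers in large models routinely multiply matrices of this scale:
m, k, n = 1024, 4096, 1024
a = np.random.rand(m, k)
b = np.random.rand(k, n)
c = a @ b  # dispatches to an optimized BLAS kernel under the hood

print(f"{matmul_flops(m, n, k):,} FLOPs for one {m}x{k} @ {k}x{n} multiply")
# The optimized kernel performs essentially this same count of operations;
# it is just far faster per operation than a naive triple loop would be.
```

The inefficiency of ML workloads, in this framing, comes from the sheer volume of such multiplies, not from any slack left in how each one is executed.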
crispyflagstones@sh.itjust.works 6 months ago
There’s an entire resurgence of research into alternative computing architectures right now, being led by some of the biggest names in computing, because of the limits we’ve hit with the von Neumann architecture as regards ML. I don’t see any reason to assume all of that research is guaranteed to fail.
AlotOfReading@lemmy.world 6 months ago
I’m not assuming it’s going to fail, I’m just saying that the exponential gains seen in early computing are going to be much harder to come by because we’re not starting from the same grossly inefficient place.
As an FYI, most modern computers are modified Harvard architectures, not von Neumann machines. There are other architectures being explored that are even more exotic, but I’m not aware of any that are massively better on the power side (vs simply being faster). The acceleration approaches I’m aware of that are more efficient (e.g. analog or optical accelerators) are also totally compatible with traditional Harvard/von Neumann architectures.
crispyflagstones@sh.itjust.works 6 months ago
And I don’t think that by comparing it to ENIAC I meant to suggest the exponential gains would be identical, but we are currently in a period of exponential gains in AI, and it’s not exactly slowing down. It just seems unthoughtful and uncritical to judge the overall efficiency of a technology by its very earliest iterations, when the field it’s based on is moving as fast as AI is.