Gsus4@mander.xyz 1 day ago
GPUs aren’t just for graphics. They speed up all kinds of vector operations, including those used in “AI stuff”. I’d just never heard of NPUs before, so I imagine they may be hardwired for the graph structure of neural nets rather than general linear algebra.
JATtho@lemmy.world 1 day ago
Initially, x86 CPUs didn’t have an FPU. It cost extra and was delivered as a separate coprocessor chip (the x87).
Later came the GPU, which is essentially an overgrown SIMD FPU.
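To make the “SIMD FPU” point concrete, here’s a toy sketch (NumPy is used purely for illustration; the arrays and values are made up): the same multiply done one float at a time, scalar-FPU style, versus as a single vectorized operation across all lanes at once.

```python
# Toy illustration of the SIMD idea. A real GPU runs the vectorized
# form on thousands of hardware lanes; NumPy just dispatches it to
# the CPU's SIMD units under the hood.
import numpy as np

a = np.arange(8, dtype=np.float32)
b = np.full(8, 2.0, dtype=np.float32)

# Scalar-FPU style: one multiply per step.
scalar = [float(x) * float(y) for x, y in zip(a, b)]

# SIMD style: one operation applied to the whole vector at once.
vector = a * b

assert np.allclose(scalar, vector)
```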
An NPU is a specialized GPU that operates on low-precision floating-point numbers and mostly does matrix multiply-and-accumulate.
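A minimal sketch of that workload, with made-up shapes and dtypes (again NumPy, not any particular NPU’s API): multiply in low precision, accumulate in a wider type, which is roughly the multiply-accumulate loop an NPU hard-wires.

```python
# Sketch of the core NPU workload: low-precision matrix
# multiply-and-accumulate. All shapes and dtypes are illustrative
# assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Weights and activations stored in float16; many NPUs go lower
# still (int8, or even int4).
W = rng.standard_normal((64, 128)).astype(np.float16)  # weights
x = rng.standard_normal(128).astype(np.float16)        # activations
b = np.zeros(64, dtype=np.float32)                     # bias

# Multiply in low precision, accumulate in float32 -- the usual
# trick to keep rounding error from compounding across the sum.
y = W.astype(np.float32) @ x.astype(np.float32) + b
```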
There is zero actual neural processing going on here. That would mean a chip that operates on bursts of encoded analog signals, within a power budget of about 20 W, and that could adjust its own weights on the fly, online, without a few datacenters spending an excessive amount of energy to update the model.
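For contrast, a rough sketch of the kind of computation that phrase points at: a single leaky integrate-and-fire neuron signalling in discrete spikes rather than matrix products. All constants here are illustrative assumptions, not a model of any real chip.

```python
# Leaky integrate-and-fire neuron: integrate input, leak toward
# rest, and emit a spike when the membrane crosses threshold.
dt, tau = 1e-3, 20e-3        # timestep and membrane time constant (s)
v_thresh, v_reset = 1.0, 0.0 # spike threshold and reset voltage
v = 0.0                      # membrane potential
spike_times = []

for t in range(1000):
    i_in = 1.2                    # constant input drive (made up)
    v += (dt / tau) * (i_in - v)  # leaky integration toward input
    if v >= v_thresh:             # threshold crossing: emit a spike
        spike_times.append(t)
        v = v_reset               # reset and integrate again
```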
UsoSaito@feddit.uk 5 hours ago
What I meant, though, is that NPUs do those calculations far more efficiently than a GPU.