Comment on Imagine not being able to shower, because AI slop generator machines need that water!
grue@lemmy.world 10 hours ago
At scale? None. If we assume (a) that the number of queries is constant (i.e., that the slow response doesn’t drive away users) and (b) that the efficiency is the same whether it’s fast or slow, then having computers that take longer to calculate each response just means you need more of them working in parallel to service the demand.
Now, a home user running AI locally could maybe save some energy by using more efficient silicon, since they only need to process one query at a time (assuming lower-spec parts actually are more efficient, which may or may not be the case), but that’s not really what we’re talking about here.
nossaquesapao@lemmy.eco.br 10 hours ago
Maybe you meant to reply to the original comment? I was mostly being ironic and pointing to a reduction in overall usage.