Comment on Microsoft Needs So Much Power to Train AI That It's Considering Small Nuclear Reactors
frezik@midwest.social 1 year ago
Oh, they’re working on it. It’s dumb, but it’s happening.
FooBarrington@lemmy.world 1 year ago
I can’t imagine they are. What would the training data for those models be? Why would you start training a model when the user sends a request? And why would you wait to respond to the request until the model finished training?
frezik@midwest.social 1 year ago
Often, these models form a feedback loop: the input from one search query is itself training data that affects the result of the next query.
FooBarrington@lemmy.world 1 year ago
Sure, but that’s not how the kind of model this thread is about works — training and inference are separate phases there. You’re describing classical ML models with continuous updates, which you wouldn’t run on this kind of GPU infrastructure.
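The continuous-update pattern being described can be sketched as a simple online linear model, where every incoming observation both gets a prediction and nudges the weights — serving and training are the same loop, unlike the train-once / infer-many pipeline used for large language models. This is a hypothetical toy illustration, not code from any real search system:

```python
def predict(weights, features):
    """Model score: dot product of weights and features."""
    return sum(w * x for w, x in zip(weights, features))

def online_update(weights, features, target, lr=0.1):
    """One SGD step: nudge weights to reduce squared error on this example."""
    error = predict(weights, features) - target
    return [w - lr * error * x for w, x in zip(weights, features)]

# Each "query" is served AND used as a training signal immediately,
# so there is no separate, GPU-heavy training phase.
weights = [0.0, 0.0]
stream = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0), ([1.0, 1.0], 3.0)]
for features, target in stream:
    score = predict(weights, features)                   # inference
    weights = online_update(weights, features, target)   # training step
```

By contrast, an LLM's weights are frozen after a massive offline training run, and user queries only ever hit the inference path.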