I don’t really get the analogy… of course a bunch of students using different tools with different inputs will yield different results? But if they use the same model and the same input at zero temperature, they will, in fact, get the same results (modulo implementation quirks like nondeterministic batching).
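To make the zero-temperature point concrete, here's a minimal sketch with a toy lookup-table "model" (not a real LLM API): greedy decoding picks the argmax token at every step, so there is no randomness left and the same input always yields the same output.

```python
# Hypothetical toy next-token scores, standing in for an LLM's logits.
TOY_MODEL = {
    ("the",): {"cat": 2.0, "dog": 1.5},
    ("the", "cat"): {"sat": 3.0, "ran": 1.0},
    ("the", "cat", "sat"): {"<eos>": 5.0},
}

def greedy_decode(prompt):
    """Temperature-zero decoding: always take the highest-scoring token."""
    tokens = list(prompt)
    while True:
        scores = TOY_MODEL.get(tuple(tokens))
        if scores is None:
            break
        nxt = max(scores, key=scores.get)  # argmax: no sampling, no RNG
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return tokens

# Two runs on the same input are identical, byte for byte.
a = greedy_decode(["the"])
b = greedy_decode(["the"])
print(a == b, a)  # → True ['the', 'cat', 'sat']
```

In a real deployment, determinism can still be broken by floating-point reduction order or batching, but that's an engineering detail, not an inherent property of the decoding rule.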
Predictability has never been a strength of ML, of course.
…That’s not really what it’s for. It’s for finding exotic stars in astronomical data, for interpolating pixels in an image, or for identifying cat videos reasonably well. That’s still a useful tool. And the modern extension of getting a glorified autocomplete to press some buttons automatically is no different, if structured and constrained appropriately.
The obvious problem, among the many I see, is that these Tech Bros are selling agentic LLMs as sapient magic lamps, not as niche tools for very specific bits of automation. Just look at the language Suleyman is using:
“I grew up playing Snake on a Nokia phone! The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me.”