Comment on "I'm looking for an article showing that LLMs don't know how they work internally"
tal@lemmy.today 5 days ago
Define “know”.
-
An LLM can be trained on text describing how it works and then produce answers that incorporate that description.
-
LLMs have no intrinsic ability to “sense” what’s going on inside themselves, nor even a sense of time; neither is an input to their state. You could build neural-net-based systems that do take such inputs, but current models, Stable Diffusion or whatever, aren’t built that way.
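To make that concrete, here's a toy sketch of a text-generation loop. The "model" (`toy_next_token_distribution`) and the vocabulary are made-up stand-ins, not any real architecture or API; the point is only the shape of the loop: the one thing that ever flows back into the model is the token it just produced. Its own activations, the wall clock, and any "what am I currently doing" signal never appear as inputs.

```python
import random

VOCAB = list(range(8))  # tiny made-up vocabulary of token IDs

def toy_next_token_distribution(token_ids):
    """Stand-in for the model: maps a sequence of token IDs to a
    next-token probability distribution. A real LLM would be a
    transformer, but the interface has the same shape."""
    favoured = (token_ids[-1] + 1) % len(VOCAB) if token_ids else 0
    return [0.8 if t == favoured else 0.2 / (len(VOCAB) - 1) for t in VOCAB]

def generate(prompt_ids, n_new):
    ids = list(prompt_ids)
    for _ in range(n_new):
        probs = toy_next_token_distribution(ids)
        # The only thing fed back in is the sampled token.
        # No activations, no clock, no internal-state readout
        # ever re-enters as an input.
        ids.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return ids

print(generate([3, 5], n_new=5))
```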
-
LLMs lack a lot of the mechanisms I’d consider essential to solving problems in a generalized way. While I think Dijkstra had a valid point:
The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.
…and we shouldn’t let our prejudices about how a mind “should” function internally cloud how we treat artificial intelligence…it’s also true that we can look at an LLM and say that it just fundamentally lacks the ability to do a lot of things that a human-like mind can. An LLM is, at best, something like a small part of our mind. Extracting that part and playing with it in isolation can produce some interesting results, but there’s a lot it can’t do on its own: it won’t, say, engage in goal-oriented behavior. Asking a chatbot questions that require introspection and insight on its part won’t yield interesting results, because it can’t really engage in introspection or insight to any meaningful degree. It has very little mutable state, unlike your mind.
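On that last point, here's a toy sketch of a chat loop, assuming a frozen, pretrained model (`frozen_model` is a made-up placeholder, not a real library call): nothing inside the model changes between turns, and the only mutable state is the transcript that gets re-fed as the prompt each time.

```python
def frozen_model(prompt):
    """Stand-in for a fixed, pretrained LLM: its 'knowledge' is baked
    into frozen weights, and calling it never updates anything."""
    return f"[reply based on {len(prompt)} chars of context]"

transcript = ""  # <- the chatbot's entire mutable state
for user_turn in ["hi", "what did I just say?"]:
    transcript += f"User: {user_turn}\n"
    # The model only ever sees the re-fed transcript; it has no other
    # memory of earlier turns, and nothing inside it has changed since.
    reply = frozen_model(transcript)
    transcript += f"Assistant: {reply}\n"

print(transcript)
```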