Comment on The first minds to be controlled by generative AI will live inside video games
MxM111@kbin.social 10 months ago
While it is not alive, whether it is a mind is not clear-cut. It can be called a kind of mind, a mind different from that of a human.
huginn@feddit.it 10 months ago
Unless you are willing to call the predictive text on your keyboard a mind, you really can’t call an LLM a mind. It is nothing more than a linear progression from that, and it has been mathematically proven not to show any form of emergent behavior.
General_Effort@lemmy.world 10 months ago
It is obvious that you do not know what either “mathematical proof” or “emergence” means. Unfortunately, you are misrepresenting the facts.
I don’t mean to criticize your religious (or philosophical) convictions. There is a reason people mostly try to keep faith and science separate.
huginn@feddit.it 10 months ago
Here’s a white paper explicitly proving:
- No emergent properties (illusory due to bad measures)
- Predictable linear progress with model size
The field changes fast; I understand it is hard to keep up.
General_Effort@lemmy.world 10 months ago
As I said, you do not understand what these two terms mean. As such, you are incapable of understanding that paper.
Perhaps your native language is Italian, so here are links to the .it Wikipedia.
kogasa@programming.dev 10 months ago
No such thing has been “mathematically proven.” The emergent behavior of ML models is their notable characteristic. The whole point is that their ability to do anything is emergent behavior.
huginn@feddit.it 10 months ago
Here’s a white paper explicitly proving:
- No emergent properties (illusory due to bad measures)
- Predictable linear progress with model size
Do try and keep up.
kogasa@programming.dev 10 months ago
Sure, if you define “emergent abilities” just so. It’s obvious from context that this is not what I described.
MxM111@kbin.social 10 months ago
I do not think that it is a “linear” progression; an ANN is by definition nonlinear. Nor do I think anything has been “mathematically proven”. If I am wrong, please provide a link.
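As a minimal illustration of that nonlinearity (a toy one-unit example, not any real model from this discussion), a single ReLU activation already breaks the additivity that a linear map would have to satisfy:

```python
# Sketch: a single ReLU unit f(x) = max(0, x) is not linear, because
# f(a + b) != f(a) + f(b) in general.

def f(x):
    return max(0.0, x)

a, b = 1.0, -1.0
print(f(a) + f(b))  # 1.0
print(f(a + b))     # 0.0 -> additivity fails, so f is nonlinear
```

Any network stacking such activations inherits that nonlinearity.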
huginn@feddit.it 10 months ago
Sure thing: here’s a white paper explicitly proving:
- No emergent properties (illusory due to bad measures)
- Predictable linear progress with model size
MxM111@kbin.social 10 months ago
Thank you. That paper does not state that there are no emergent abilities, though. It only states that one can introduce a metric with respect to which the ability improves smoothly rather than in a threshold-like way. While interesting, this only suggests that things like intelligence are smooth functions of scale, but so what? Other metrics show exponential or threshold-like dependence, and which metric is the right one depends only on how one will use it. And there is no law that emergent properties have to be threshold-like. Quite the opposite: in nearly all the examples from physics that I know of, emergence appears gradually.
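To make the metric point concrete, here is a minimal sketch (with made-up numbers, not figures from the paper) of how a smoothly improving per-token accuracy can look like threshold emergence under an all-or-nothing metric such as exact match on a multi-token answer:

```python
# Sketch: if each token of a k-token answer is correct independently with
# probability p, exact-match accuracy is p**k. A per-token accuracy that
# climbs smoothly then produces a sharp apparent threshold in exact match.

k = 10  # hypothetical answer length in tokens

for p in [0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]:
    print(f"per-token accuracy {p:.2f} -> exact match {p ** k:.3f}")

# per-token accuracy 0.50 -> exact match 0.001
# ...
# per-token accuracy 0.99 -> exact match 0.904
```

The sudden jump at the high end is an artifact of the all-or-nothing scoring, not a discontinuity in the underlying per-token capability.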
Corgana@startrek.website 10 months ago
Sorry you’re getting downvoted; you’re correct. It’s not implausible to assume that generative AI systems may have some kind of umwelt, but it is highly implausible to expect that it would be anything resembling that of a human (or animal). I think people are getting hung up on it because they’re assuming that a lack of understanding of language implies a lack of any conscious experience. Humans do lots of things without understanding how they might be understood by others.
To be clear, I don’t think these systems have experience, but it’s impossible to rule out until an actual robust theory of mind comes around.
match@pawb.social 10 months ago
What can’t be a kind of mind to you?