Comments on "Boffins find AI models tend to escalate conflicts to all-out nuclear war"
datendefekt@lemmy.ml 10 months ago
Do the LLMs have any knowledge of the effects of violence or the consequences of their decisions? Do they know that resorting to nuclear war will lead to their destruction?
I think that this shows that LLMs are not intelligent, in that they repeat what they’ve been fed, without any deeper understanding.
I think that this shows that LLMs are not intelligent, in that they repeat what they’ve been fed
LLMs are redditors confirmed.
CosmoNova@lemmy.world 10 months ago
In fact they do not have any knowledge at all. They do make clever probability calculations, but at the end of the day, concepts like geopolitics and war are far more complex and nuanced than assigning each phrase a value and trying to calculate an outcome.
And even if we manage to create living machines, they'll still be human-made, containing human flaws, and likely not even built by the best experts in these fields.
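The "probability calculations" described above can be sketched as a toy softmax over candidate next tokens. This is only an illustration of the mechanism being discussed, not a real language model; the token names and scores are made up for the example.

```python
import math

# Toy illustration (not a real LLM): a model assigns a score (logit)
# to each candidate next token, and softmax turns scores into
# probabilities. The tokens and scores here are invented.
logits = {"escalate": 2.0, "negotiate": 1.0, "retreat": 0.0}

total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# The highest-scoring continuation wins most often; nothing in this
# calculation models the consequences of the chosen word.
best = max(probs, key=probs.get)
```

The point of the toy: the sampling step only ranks continuations by learned score. Any notion of "consequences" would have to be present in the training data's statistics, not in the calculation itself.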
rottingleaf@lemmy.zip 10 months ago
As in “an LLM doesn’t model the domain of the conversation in any way, it just extrapolates what the hivemind says on the subject”.