Comment on "I'm looking for an article showing that LLMs don't know how they work internally"
lgsp@feddit.it 2 days ago
I'm aware of this and agree, but:
- I see that treating an LLM's account of how it got to its answer as "proof" of sound reasoning has become common
- the new trend of "reasoning" models, where an internal conversation is shown in all its steps, seems to rest on this assumption of a trustworthy train of thought. Given the simple experiment I mentioned (sketched below), that assumption is extremely dangerous and misleading
- take a look at this video: youtube.com/watch?v=Xx4Tpsk_fnM : everything there is based on observing and directing this internal reasoning, and these guys are computer scientists. How can they trust it?
So having a well-written article at hand is a good idea, imho
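For anyone who wants to try the kind of experiment lgsp is referring to, here is a minimal sketch (assuming the OpenAI Python client and an illustrative model name; any chat-completion API would do): ask the model a question, then ask it how it arrived at the answer. The "explanation" is just another sampled continuation, not a readout of the computation that produced the answer.

```python
# Minimal sketch of the "ask the model to explain itself" experiment.
# Assumes the OpenAI Python client; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # hypothetical choice; any chat model works

def ask(messages):
    # One ordinary chat-completion call; returns the generated text.
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Step 1: get an answer.
question = [{"role": "user", "content": "What is 17 * 23?"}]
answer = ask(question)

# Step 2: ask the model how it arrived at that answer.
followup = question + [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Explain, step by step, how you arrived at that answer."},
]
explanation = ask(followup)

# The explanation is conditioned only on the conversation text so far;
# nothing here inspects the weights or activations that produced `answer`.
print(answer)
print(explanation)
```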
Blue_Morpho@lemmy.world 2 days ago
I only follow some YouTubers like Digital Spaceport, but there has been a lot of progress since the years when LLMs were purely predictive. They now have an inductive engine attached to the LLM to provide logic guard rails.