Comment on Anthropic's Claude 4 could "blackmail" you in extreme situations

theparadox@lemmy.world 2 weeks ago

> I think you’re either being a little dismissive of the potential complexity of the “thinking” capability of LLMs, or at least a little generous, if not mystical, in your imagination of what the purely physical electrical signals in our heads are actually doing to learn how to interpret all these little shapes we see on screens.

I don’t think I’m doing either of those things. I respect the scale and speed of these models, and I am well aware that I’m little more than a machine made of meat.

> Babies start out mimicking. The thing is, they learn.

Humans learn so much more than mimicry before they start communicating. They start learning reason, logic, etc. as they develop their vocabulary.

The difference is that, as I understand it, these models are often “trained” on very, very large sets of data. They have built a massive network of the way words are used in communication - likely built from more text than a human could process in several lifetimes. They come out of the gate with an enormous vocabulary and an understanding of how to mimic and replicate its use. If they had been trained on just as much data, but data unrelated to communication, would you still think them capable of reasoning without the ability to “sound” human? They have the “vocabulary” and references to mimic a deep understanding, but because we lack the ability to understand the final algorithm, it seems like an enormous leap to presume actual reasoning is taking place.

Frankly, I see no reason for models like LLMs at this stage. I’m fine putting the brakes on this shit - even if we disagree on the reasons why. ML can and has been employed to achieve far more practical goals. Use it alongside humans for a while until it is verifiably more reliable at some task - recognizing cancer in imaging, or generating molecules likely to achieve a desired goal. LLMs are just a lazy shortcut to look impressive and sell investors on the technology.

Maybe I am failing to see reality - maybe I don’t understand the latest “AI” well enough to give my two cents. That’s fine. I just think it’s being hyped because these companies desperately need VC money to stay afloat.

It works because humans have an insatiable desire to see agency everywhere they look. Spirits, monsters, ghosts, gods, and now “AI.”
