Comment on Expecting a LLM to become conscious, is like expecting a painting to become alive

ji59@hilariouschaos.com 14 hours ago

I have seen several papers on LLM safety (for example, "Alignment faking in large language models") that show some "hidden" self-preserving behaviour in LLMs. But as far as I know, no one understands whether this behaviour is simply trained in and means nothing, or whether it emerged from the model's complexity.

Also, I do not use the ChatGPT app, but doesn't it have a live chat feature that continuously listens to the user and reacts? It can even take pictures. So continuity isn't a huge problem. And LLMs can already interact with tools, so creating a tool that moves a robot hand shouldn't be that complicated.
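To illustrate what I mean by "tool", here is a rough sketch using the OpenAI-style tool-calling API; the `move_hand` function and its coordinate parameters are made up for illustration, not from any real robot stack:

```python
# Sketch: exposing a hypothetical "move robot hand" action as an LLM tool.
import json
from openai import OpenAI

def move_hand(x: float, y: float, z: float) -> str:
    # In a real system this would send a command to the robot controller.
    return f"hand moved to ({x}, {y}, {z})"

tools = [{
    "type": "function",
    "function": {
        "name": "move_hand",
        "description": "Move the robot hand to the given coordinates (metres).",
        "parameters": {
            "type": "object",
            "properties": {
                "x": {"type": "number"},
                "y": {"type": "number"},
                "z": {"type": "number"},
            },
            "required": ["x", "y", "z"],
        },
    },
}]

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Wave the hand at head height."}],
    tools=tools,
)

# If the model decided to call the tool, run it with the arguments it chose.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(move_hand(**args))
```

The hard part is the robotics, not the plumbing: the model only ever emits a structured function call, and your own code decides what actually happens.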
