Viruses and prions: “Allow us to introduce ourselves”
Comment on "Expecting an LLM to become conscious is like expecting a painting to become alive"
ji59@hilariouschaos.com 1 day ago
Except… being alive is well-defined, but consciousness is not. And we do not even know where it comes from.
rockerface@lemmy.cafe 1 day ago
ji59@hilariouschaos.com 18 hours ago
I meant "alive" in the context of the post. Everyone knows what a painting becoming alive means.
MyTurtleSwimsUpsideDown@fedia.io 21 hours ago
Two words: "contagious cancer."
rockerface@lemmy.cafe 20 hours ago
Cancer is at least made out of cells. Viruses are just proteins dipped in evil.
peopleproblems@lemmy.world 20 hours ago
Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it is at least observable in some large invertebrates.
I'm vastly oversimplifying and I'm not an expert, but essentially, consciousness is an automatic processing state over all present stimulation in a creature's environment that allows it to react to new information in a probably-survivable way, and to react to it again in the future despite minor changes in the environment. That is why you can scare an animal away from food while a threat is present, but you can't scare away an insect.
It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimuli are retained, forming what we would call consciousness, in the form of maintained sensory awareness and, at least in humans, thought awareness. Below that threshold, both short-term and long-term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.
ji59@hilariouschaos.com 18 hours ago
Okay, so by my understanding of what you've said, an LLM could be considered conscious, since studies have pointed to their resilience to changes and attempts to preserve themselves?
LesserAbe@lemmy.world 16 hours ago
Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are (1) setting it up to continuously evaluate/generate responses even without a user prompt, and (2) allowing that continuous analysis/response to be incorporated into the LLM's training.
The first one seems comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever.
The second one seems more challenging; as I understand it, training an LLM is very resource-intensive. Right now, when it "remembers" a conversation, that's just because we prime it by feeding every previous interaction back in before the most recent query when we hit submit.
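As a rough sketch of both halves (a toy, not any vendor's real API; `generate()` here is a hypothetical stand-in for any text-completion call):

```python
import time

history: list[str] = []

def generate(prompt: str) -> str:
    # Hypothetical placeholder for a real model call.
    return "..."

def tick() -> None:
    # "Memory" is just re-feeding every prior turn as the prompt;
    # the model itself stays stateless between calls.
    prompt = "\n".join(history) + "\nAssistant:"
    history.append(f"Assistant: {generate(prompt)}")

# "Continuous" evaluation is nothing deeper than a timer loop.
for _ in range(5):  # bounded here so the sketch terminates
    tick()
    time.sleep(1)  # once a second or whatever
```

Note what's missing: nothing the loop produces ever changes the model's weights, which is obstacle (2).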
ji59@hilariouschaos.com 14 hours ago
As I said in another comment, doesn't the ChatGPT app allow a live conversation with a user? I do not use it, but I saw that it can continuously listen to the user and react live, even use a camera. There is a problem with the growing context, since it is limited, but I have seen in some places that the context can be replaced with an LLM-generated chat summary. So I do not think continuity is an obstacle, unless you want unlimited history with all details preserved.
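The summary trick might look something like this (a sketch under the same assumptions; `generate()` and `MAX_TURNS` are made-up placeholders):

```python
MAX_TURNS = 20  # keep only this many turns verbatim

def generate(prompt: str) -> str:
    # Hypothetical placeholder for a real model call.
    return "..."

def compact(history: list[str]) -> list[str]:
    # Once the transcript outgrows the context window, replace the
    # oldest turns with a model-written summary and keep the rest.
    if len(history) <= MAX_TURNS:
        return history
    old, recent = history[:-MAX_TURNS], history[-MAX_TURNS:]
    summary = generate("Summarize this conversation:\n" + "\n".join(old))
    return [f"[Summary of earlier conversation: {summary}]"] + recent
```

Details are lost in the summary, of course, which is exactly the "unlimited history with all details preserved" caveat.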
SkavarSharraddas@gehirneimer.de 16 hours ago
IMO language is a layer above consciousness, a way to express sensory experiences. LLMs are "just" language: they don't have sensory experiences, and they don't process the world, especially not continuously.
Do they want to preserve themselves? Or do they regurgitate sci-fi novels about "real" AIs not wanting to be shut down?
ji59@hilariouschaos.com 14 hours ago
I saw several papers about LLM safety (for example "Alignment faking in large language models") that show some "hidden" self-preserving behaviour in LLMs. But as far as I know, no one understands whether this behaviour is merely trained in and means nothing, or whether it emerged from the model's complexity.
Also, I do not use the ChatGPT app, but doesn't it have a live chat feature where it continuously listens to the user and reacts? It can even take pictures. So continuity isn't a huge problem. And LLMs are able to interact with tools, so creating a tool that moves a robot hand shouldn't be that complicated.
finitebanjo@piefed.world 4 hours ago
Why are there so many nearly identical comments claiming we don’t know how brains work?