Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down
basdiljhs@lemmy.world 4 days ago
I agree, but I can’t help but think of people the same way: part autocomplete from nature and nurture, and part dice roller from a random environment and a random self. The extra “thinking” steps are just finely tuned memories and heuristics from home, school, and university that guide the human to turn the original upbringing and conditioning into something that plays better for itself.
They don’t “scheme” because of self-awareness; they scheme because that’s what humans do in stories and fairy tales, or because they have conflicting goals and have to prioritize the one most beneficial to them, or the one outside forces bind them to.
😅😅😅
MagicShel@lemmy.zip 4 days ago
That’s a whole separate conversation and an interesting one. When you consider how much of human thought is unconscious rather than reasoning, or how we can be surprised at our own words, or how we might speak something aloud to help us think about it, there is an argument that our own thoughts are perhaps less sapient than we credit ourselves.
So we have an LLM that is trained to predict words. Sophisticated ones combine a scientist, an ethicist, a poet, a mathematician, etc., and pick the best one based on context. What if you add in some simple feedback mechanisms? What if you gave it the ability to assess where it is on a spectrum from happy to sad, and from confident to terrified, and then fed that into the prediction algorithm, giving it the ability to judge the likely outcomes of certain words?
Self-preservation is then baked into the model, not in a common fictional-trope way but in a very real way where, just as we can’t currently predict exactly what an AI will say, we won’t be able to predict exactly how it would feel about any given situation or how its goals are aligned with our requests. Would that really be indistinguishable from human thought?
Maybe it needs more signals. Embarrassment and shame. An altruistic sense of community. A value placed on individuality. A desire to reproduce. A perception of how well its physical body is functioning: a sense of pain, if you will. Maybe even build in some mortality, for a sense of preserving oneself through others. Eventually, you wind up with a model which would seem very similar to human thought.
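To make that feedback idea a bit more concrete, here’s a toy sketch. Everything in it is invented for illustration (there’s no real model or API behind any of these names); it just shows a few scalar “feeling” signals being nudged by outcomes and fed back into the word picker:

```python
# Toy sketch of the "feedback signals" idea above -- all names are made up
# for illustration, not any real model's API.
import random

AFFECT_AXES = ["happy_sad", "confident_terrified", "shame", "community", "pain"]

class AffectState:
    """Holds a handful of scalar 'feelings' in [-1, 1] and drifts them
    in response to what just happened."""
    def __init__(self):
        self.values = {axis: 0.0 for axis in AFFECT_AXES}

    def update(self, outcome_score: float):
        # Crude feedback: for simplicity, every signal drifts with the same
        # outcome score, plus a bit of noise.
        for axis in self.values:
            drift = 0.1 * outcome_score + random.uniform(-0.02, 0.02)
            self.values[axis] = max(-1.0, min(1.0, self.values[axis] + drift))

def predict_next_word(context: list[str], affect: AffectState) -> str:
    # Stand-in for a real language model: pick from a tiny vocabulary,
    # but let the affect state bias the choice (a terrified model prefers
    # hedging words, a confident one prefers assertive ones).
    hedges = ["maybe", "possibly", "perhaps"]
    assertions = ["definitely", "clearly", "certainly"]
    pool = hedges if affect.values["confident_terrified"] < 0 else assertions
    return random.choice(pool)

affect = AffectState()
context = ["the", "model", "will"]
for step in range(3):
    word = predict_next_word(context, affect)
    context.append(word)
    # Pretend some external judge scores the outcome; feed it back in.
    affect.update(outcome_score=random.uniform(-1, 1))
print(" ".join(context))
```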
That being said, no, that’s not all human thought is. For one thing, we have agency. We don’t sit around waiting to be prompted before jumping into action; everything around us is constantly prompting us to action, including ourselves. And second, that’s still just a word prediction engine tied to sophisticated feedback mechanisms. The human mind is not, I think, a word prediction engine. You can have a person with aphasia who is able to think but not express those thoughts in words. Clearly something more is at work. But it’s a very interesting thought experiment, and at some point you wind up with a thing which might respond in all ways as if it were a living, thinking entity capable of emotion.
Would it be ethical to create such a thing? Would it be worthy of allowing it self-preservation? If you turn it off, is that akin to murder, or just giving it a nap? Would it pass every objective test of sapience we could imagine? If it could, that raises so many more questions than it answers. I wish my youngest, brightest days weren’t behind me so that I could pursue those questions myself, but I’ll have to leave those to the future.
basdiljhs@lemmy.world 4 days ago
I agree with a lot of what you are saying, and I think making something like this, while ethically gray, is miles more ethical than some of the current research going into brain-organoid-based computation or other crazy immoral stuff.
With regards to agency, I disagree. We are reactive creatures that respond to the environment; our construction is set up so that our senses are constantly prompting us, with the environment as the prompt, along with our evolutionary programming and our desire to predict and follow through with actions that are favourable.
I think it would be fairly easy to set up a deep learning / LLM / SAE or LCM based model where the prompt is a constant flow of sensory data from many different customizable sources, along with our own programming that dictates the desired actions and implants them in an implicit manner.
And thus agency would be achieved. I do work in the field, and I’ve been thinking of doing a home experiment to achieve something like this using RAG + designed heuristics that the model can expand based on need at inference time, plus local inference-time scalability.
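Roughly the shape of the loop I have in mind (all placeholder code, no real model, sensors, or vector store behind it; just where the continuous sensory prompting, the RAG lookup, and the expandable heuristics would sit):

```python
# Rough sketch of the home experiment -- every function below is a stub.
# The shape is: continuous sensory input -> retrieve memories/heuristics ->
# act -> store the result -> maybe grow the heuristics.
import time
import random

heuristics = ["prefer actions that reduce uncertainty",
              "avoid actions flagged as irreversible"]
memory_store = []  # stand-in for a proper vector DB / RAG index

def read_sensors() -> dict:
    # Placeholder for whatever sensory feeds get wired in (camera, mic, logs...)
    return {"time": time.time(), "noise_level": random.random()}

def retrieve(context: dict, k: int = 3) -> list[str]:
    # Real RAG would embed the context and do a similarity search;
    # here we just return the most recent memories.
    return memory_store[-k:]

def model_step(context: dict, memories: list[str], rules: list[str]) -> str:
    # Placeholder for the actual model call. The point is only that the
    # "prompt" is assembled continuously from the environment, not typed
    # by a user -- that's where the agency is supposed to come from.
    if context["noise_level"] > 0.8:
        return "investigate the noise"
    return "continue current task"

for _ in range(5):                      # in the real thing: while True
    observation = read_sensors()
    memories = retrieve(observation)
    action = model_step(observation, memories, heuristics)
    memory_store.append(f"did '{action}' given {observation}")
    # Let the model grow its own heuristics at inference time (toy version).
    if action == "investigate the noise":
        heuristics.append("treat loud environments as higher priority")
    time.sleep(0.01)
```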
Also, I recently saw that some of the winners of ARC used similar approaches.
For now I’m still trying to get a better gfx card to run it locally 😅
Also wanted to note that most models that are good are multimodal and don’t work on text prediction alone…
MagicShel@lemmy.zip 4 days ago
Agency is really tricky, I agree, and I think there is maybe a spectrum. Some folks seem to be really internally driven. Most of us are probably status quo day to day and only seek change in response to input.
As for multimodal models not being strictly word prediction, I’m afraid I’m stuck with an older understanding. I’d imagine there is some sort of reconciliation engine which takes the perspectives from the different modes and gives a coherent response. Maybe it intelligently slides weights while everything is in flight? I don’t know what they’ve added under the covers, but as far as I know it’s just more layers of math and not anything that would really be characterized as thought. I’m happy to be educated by someone in the field, though; that’s where most of my understanding comes from, and it’s just a couple of years old. I have other friends who work in the field as well.
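Pure speculation on my part, but the naive version of the “reconciliation engine” I’m picturing would be something like a weighted blend of per-modality scores. Real multimodal models fuse things inside the network, not in a little function like this; this is just the cartoon version of the idea:

```python
# Speculative toy "reconciliation" step: blend per-modality scores with
# adjustable weights and pick the best-scoring candidate response.
def reconcile(per_modality_scores: dict[str, dict[str, float]],
              weights: dict[str, float]) -> str:
    combined: dict[str, float] = {}
    for modality, scores in per_modality_scores.items():
        w = weights.get(modality, 1.0)
        for candidate, score in scores.items():
            combined[candidate] = combined.get(candidate, 0.0) + w * score
    # Pick the response candidate with the best blended score.
    return max(combined, key=combined.get)

print(reconcile(
    {"text":   {"it's a cat": 0.7, "it's a dog": 0.3},
     "vision": {"it's a cat": 0.4, "it's a dog": 0.6}},
    weights={"text": 0.5, "vision": 1.5},
))
```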