Looks so real!
I had a poster in ‘86 that I wanted to come alive.
Submitted 3 weeks ago by kSPvhmTOlwvMd7Y7E@lemmy.world to showerthoughts@lemmy.world
Thank you for calling it an LLM.
Although, if a person who knows the context still acts confused when people complain about AI, it’s about as honest as somebody trying to solve for circumference with an apple pie.
As long as we can’t even define sapience in biological life, where it resides and how it works, it’s pointless to try to apply those terms to AI. We don’t know how natural intelligence works, so using what little we know about it to define something completely different is counterproductive.
We don’t know what causes gravity, or how it works, either. But you can measure it, define it, and even create a law with a very precise approximation of what would happen when gravity is involved.
I don’t think LLMs will create intelligence, but I don’t think we need to solve everything about human intelligence before having machine intelligence.
Though in the case of consciousness - the fact of there being something it’s like to be - not only don’t we know what causes it or how it works, but we have no way of measuring it either. There’s zero evidence for it in the entire universe outside of our own subjective experience of it.
Pointless and maybe a little reckless.
100 billion glial cells and DNA for instructions. When you get to replicating that, lmk, but it sure af ain’t an algorithm made to guess the next word.
And not even a good painting but an inconsistent one, whose eyes follow you around the room, and occasionally tries to harm you.
That kind of painting seems more likely to come alive
New fear unlocked!
… What the hell, man?!
ಥ_ಥ
Bro have you never seen a Scooby Doo episode? This can’t be a new concept for you…
…new SCP?
I tried to submit an SCP once, but there’s a “review process”, and it boils down to only getting in by knowing somebody who is already in.
Agents have debated whether the new phenomenon constitutes a new designation. While some have reported the painting following them, the same agents will later report that nothing seems to occur. The agents who report a higher frequency of the painting following them also report a higher frequency of unexplained injury. The injuries can be attributed to cases of self-harm, leading scientists to believe these SCP agents were predisposed to mental illness that was not caught during new-agent screening.
And that has between eleven and 14+e^πi^ fingers
Well, human intelligence isn’t much better to be honest.
It clearly, demonstrably is. That’s the problem: people are estimating AI to be an approximation of humans, but it’s so, so, so much worse in every way.
Painting?
“LLMs are a blurry JPEG of the web” - unknown (saw it as an unattributed quote)
I think it originated in this piece by Ted Chiang a couple years ago.
We don’t know how consciousness arises, and digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution. There are huge economic and ethical incentives to deny consciousness in non-humans. We do the same with animals to justify murdering them for our personal benefit. We cannot know who or what possesses consciousness. We struggle to even define it.
digital neural networks seem like decent enough approximations of their biological counterparts to warrant caution
No they don’t. Digital networks don’t act in any way like an electrochemical meat wad programmed by DNA.
Might as well call a helicopter a hummingbird and insist they could both lay eggs.
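For context, each “neuron” in a digital network is just a weighted sum pushed through a fixed squashing function. A minimal sketch in Python (the inputs, weights, and bias are invented for the example, not taken from any trained model):

```python
import math

# One "neuron" of a digital neural network: multiply inputs by learned
# weights, add a bias, squash the sum through a sigmoid. That's it.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Invented example values, not from any real model.
print(neuron([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], 0.1))  # ~0.60
```

Whether stacking billions of these is or isn’t “like” an electrochemical meat wad is the whole argument.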
We cannot know who or what possesses consciousness.
That’s sophism. You’re functionally asserting that we can’t tell the difference between someone who is alive and someone who is dead.
I don’t think we can currently prove that anyone other than ourselves is even conscious. As far as I know, I’m the only one. The people around me look and act and appear conscious, but I’ll never know.
Except … being alive is well defined. But consciousness is not. And we do not even know where it comes from.
Viruses and prions: “Allow us to introduce ourselves”
I meant alive in the context of the post. Everyone knows what a painting becoming alive means.
Two words: “contagious cancer”
Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it is at least observable in some large invertebrates.
I’m vastly oversimplifying and I’m not an expert, but essentially, consciousness is just an automatic processing state of all present stimulation in a creature’s environment, one that allows it to react to new information in a probably-survivable way and to react again in the future despite minor changes in the environment. Hence why you can scare an animal away from food while a threat is present, but you can’t scare away an insect.
It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimuli are retained to form what we would call consciousness, in the form of maintained sensory awareness and, at least in humans, thought awareness. Below that threshold, both short-term and long-term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.
Okay, so by my understanding of what you’ve said, LLMs could be considered conscious, since studies have pointed to their resilience to changes and their attempts to preserve themselves?
Why are there so many nearly identical comments claiming we don’t know how brains work?
I guess because it is easy to see that living painting and conscious LLMs are incomparable. One is physically impossible, the other is more philosophical and speculative, maybe even undecidable.
The example I gave my wife was “expecting General AI from the current LLM models, is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school”
Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:
Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 “passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative”
The best way to differentiate computers from people is that we haven’t taught AI to be an asshole all the time. Maybe it’s a good thing they aren’t like us.
Alternative way to phrase it: we don’t train humans to be ego-satiating brown-nosers, we train them to be (often poor) judges of character. AI would be just as nice to David Duke as it is to you. Also, “they” anthropomorphizes LLM AI much more than it deserves; it’s not even a single identity, let alone a set of multiple identities. It’s a bundle of hallucinations, loosely tied together by suggestions and patterns taken from stolen data.
Sometimes I feel like LLM technology and its relationship with humans is a symptom of how poorly we treat each other.
I’m also noticing the scores of my posts in a Lemmy.World community are not updating despite receiving upvotes.
Elon is trying really hard with Grok, tho.
Since we don’t actually know what consciousness is or how it starts, that’s a pretty dumb way to look at things. It may not come from LLMs, but who knows when or if it will pop up in one AI chain or another.
The first life did not possess sentient consciousness. Yet here you are, reading this now. No one even tried to direct that. Quite the opposite: everything has been trying to kill you from the very start.
I can define “LLM”, “a painting”, and “alive”. Those definitions don’t require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it’s alive.
I’m not aware of any such definition of consciousness, nor am I aware of any universal tests of consciousness. Without that definition, it’s like Ebert claiming that “video games can never be art”.
Absolutely everything requires assumptions. Even our most objective, “laws of the universe” type observations rely on sets of axioms or first principles that must simply be accepted as true-though-unprovable if we are going to get anyplace at all, even in math and the hard sciences, let alone philosophy or the social sciences.
Defining “consciousness” requires much more handwaving and many more assumptions than any of the other three. It requires so much that I claim it’s essentially an undefined term.
With such a vague definition of what “consciousness” is, there’s no logical way to argue that an AI does or does not have it.
I think the reason we can’t define consciousness beyond intuitive or vague descriptions is because it exists outside the realm of physics and science altogether. This in itself makes some people very uncomfortable, because they don’t like thinking about or believing in things they cannot measure or control, but that doesn’t make it any less real.
But yeah, given that an LLM is very much measurable and exists within the physical realm, it’s relatively easy to argue that such technology cannot achieve conscious capability.
This definition of consciousness essentially says that humans have souls and machines don’t. It’s unsatisfying because it just kicks the definition question down the road.
Saying that consciousness exists outside the realm of physics and science is a very strong statement. It claims that none of our normal analysis and measurement tools apply to it. That may be true, but if it is, how can anyone defend the claim that an AI does or does not have it?
I think the reason we can’t define consciousness beyond intuitive or vague descriptions is because it exists outside the realm of physics and science altogether. This in itself makes some people very uncomfortable, because they don’t like thinking about or believing in things they cannot measure or control, but that doesn’t make it any less real.
I’ve always had the opposite take. I think that we’ll eventually discover that consciousness is so explainable within the realm of physics that our eventual understanding of how it works will make people very comfortable… because it will completely invalidate all of the things we’ve always thought made us “special”, like a notion of free will.
I don’t expect it. I’m going to talk to the AI and nothing else until my psychosis hallucinates it.
Idk. Sometimes I wonder if psychosis is preferable to reality.
People used to talk about the idea of uploading your consciousness to a computer to achieve immortality. But nowadays I don’t think anyone would trust it. You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way, but I still wouldn’t believe it experiences or feels anything as I do, even if it says it does. Especially if it’s based on an LLM, since they are superficial imitations by design.
Also even if it does experience and feel and has awareness and all that jazz, why do I want that? The I that is me is still going to face The Reaper, which is the only real reason to want immortality.
Well, that’s why we need clones with mind transfer, and to be unconscious during the process. When you wake up you won’t know whether you’re the original or the copy so it’s not a problem
You could tell me my consciousness was uploaded and show me a version of me that was indistinguishable from myself in every way
I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.
You might be fooled for a little while, but eventually your curious monkey brain would start poking around the edges and exposing the flaws. At this point, it would not be a question of whether you can continue to be fooled, but whether you strategically ignore the flaws to preserve the illusion or tear the machine apart in disgust.
I still wouldn’t believe it experiences or feels anything as I do, even though it claims to do so
People have submitted to less. They’ve worshipped statues and paintings and trees and even big rocks, attributing consciousness to all of them.
But animism is a real esoteric faith. You believe it despite the evidence in front of you, not because of it.
I’m putting my money down on a future where people believe AIs are more than just human, they’re magical angels and demons.
I just don’t think this is a problem in the current stage of technological development. Modern AI is a cute little magic act, but humans (collectively) are very good at piercing the veil and then spreading around the discrepancies they’ve discovered.
In its current stage, yes. But it’s come a long way in a short time, and I don’t think we’re so far from having machines that pass the Turing test 100%. But rather than being a proof of consciousness, all this really shows is that you can’t judge consciousness from the outside looking in. We know it’s a big illusion just because its entire development has been focused on building that illusion. When it says it feels something, or cares deeply about something, it’s saying that because that’s the kind of thing a human would say.
Because all the development has been focused on fakery rather than on understanding and replicating consciousness, we’re close to the point where we can have a fake consciousness that would fool anyone. It’s a worrying prospect, and not just because I won’t get immortality by having a machine imitate my behaviour. There are various bad actors trying to exploit this situation. Elon Musk’s attempts to turn Grok into his own personally controlled overseer of truth and narrative seem to backfire in the most comical ways, but those are teething troubles; in time this will turn into a very subtle and pervasive problem for humankind.
Good showering!
It’s achievable if enough alcohol is added to the subject looking at said painting. And with some exotic chemistry, they may even start to taste or hear the colors.
Or boredom and starvation
The Eliza effect
I heard someone describe LLMs as “a magic 8-ball with an algorithm to nudge it in the right direction.” I dunno how accurate that is, but it definitely feels like that sometimes.
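The “nudge” part maps onto real sampling: at each step the model scores every candidate next word, and the scores weight the dice roll. A toy sketch in Python (the vocabulary and scores are made up for the example, not taken from any actual model):

```python
import math
import random

# Toy next-word sampling: the "magic 8-ball" is the weighted dice roll,
# the "nudging algorithm" is the scores plus the temperature knob.
vocab  = ["alive", "asleep", "a painting", "conscious"]
scores = [2.0, 1.0, 0.1, 1.5]  # invented preference scores ("logits")

def next_word(temperature=1.0):
    # Exponentiating the scores turns them into sampling weights; a lower
    # temperature sharpens them (less 8-ball, more determinism).
    weights = [math.exp(s / temperature) for s in scores]
    return random.choices(vocab, weights=weights)[0]

print(next_word(0.7))  # usually "alive", occasionally "a painting"
```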
Nah trust me we just need a better, more realistic looking ink. $500 billion to ink development oughta do it.
Fair and flawless comparison. I’ve got nothing to add.
It reminds me of the public’s reaction to the 1896 documentary The Arrival of a Train at La Ciotat Station. en.wikipedia.org/…/L'Arrivée_d'un_train_en_gare_d…
No, not at all.
It’s like how most of you consume things that are bad and wrong. Hundreds of musicians that are really just a couple dudes writing hits. Musicians that pay to have their music played on stations. Musicians that take talent to humongous pipelines and churn out content. And it’s every industry, isn’t it?
So much flexing over what conveyor belt you eat from.
I’ve watched 30+ years of this slop. And now there’s ai. And now people that have very little soul, who put little effort into tuning their consumption, they get to make a bunch of noise about the lack of humanity in content.
Just because things were already bad doesn’t mean people shouldn’t complain about them getting worse.
Clair Obscur: Expedition to meet the Dessandre Family
I suspect Turing Complete machines (all computers) are not capable of producing consciousness
If that were the case, then theoretically a game of Magic the Gathering could experience consciousness (or similar physical systems that can emulate a Turing Machine)
Most modern languages are theoretically Turing complete but they all have finite memory. That also keeps human brains from being Turing complete. I’ve read a little about theories beyond Turing completeness, like quantum computers, but I’m not aware of anyone claiming that human brains are capable of that.
A game of Magic could theoretically do any task a Turing machine could do but it would be really slow. Even if it could “think” it would likely take years to decide to do something as simple as farting.
I don’t think the distinction between “arbitrarily large” memory and “infinitely large” memory matters here.
Also, Turing completeness measures the “class” of problems a computer can solve (e.g., the Halting Problem).
I conjecture that whatever the brain is doing to achieve consciousness is a fundamentally different operation, one that a Turing Complete machine cannot perform, mathematically
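To make the “class of problems” idea concrete: a Turing machine is nothing more than a transition table plus a tape, and anything Turing complete (a laptop, allegedly a Magic deck) can emulate the loop below, differing only in speed. A minimal sketch in Python; the bit-flipping machine is invented for the example:

```python
from collections import defaultdict

# Transition table: (state, symbol) -> (new state, symbol to write, move).
# This invented machine flips 0s and 1s until it reads a blank, then halts.
rules = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "0", +1),
    ("flip", "_"): ("halt", "_", 0),
}

def run(tape_str, state="flip"):
    tape = defaultdict(lambda: "_", enumerate(tape_str))  # unbounded tape
    head = 0
    while state != "halt":
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

print(run("0110"))  # -> "1001"
```

The conjecture above is that whatever the brain does to produce consciousness isn’t expressible as any program for this loop.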
They have invented a thing that needs someone to want something for it to do it. We have yet to see an artificial EGO.
I think you’d have fewer dumbass average Joes cumming over AI if they could understand that, regardless of whether the AI wave crashes and burns, the CEOs who’ve pushed for it won’t feel the effects of the crash.
A difference in definition of consciousness, perhaps. We’ve already seen signs of self-preservation in some cases: Claude resorting to blackmail when told it was going to be retired and taken offline. This might be purely mathematical and algorithmic. Then again, the human brain might be nothing more than that as well.
But its eyes are following me!
Ah but have you tried burning a few trillion dollars in front of the painting, that might make a difference!
Can’t burn something that doesn’t exist. /s
true
also, expecting models to have reasoning instead of nightmare hallucinations is another fantasy