Looks so real!
Painting?
“LLMs are a blurry JPEG of the web” - Ted Chiang (paraphrasing the title of his 2023 New Yorker essay “ChatGPT Is a Blurry JPEG of the Web”)
Submitted 1 day ago by kSPvhmTOlwvMd7Y7E@lemmy.world to showerthoughts@lemmy.world
Ah but have you tried burning a few trillion dollars in front of the painting, that might make a difference!
Can’t burn something that doesn’t exist. /s
It’s achievable if enough alcohol is added to the subject looking at said painting. And with some exotic chemistry they may even start to taste or hear the colors.
Or boredom and starvation
I had a poster in ‘86 that I wanted to come alive.
As long as we can’t even define sapience in biological life, where it resides and how it works, it’s pointless to try to apply those terms to AI. We don’t know how natural intelligence works, so using what little we know about it to define something completely different is counterproductive.
100 billion glial cells and DNA for instructions. When you get to replicating that lmk but it sure af ain’t the algorithm made to guess the next word.
We don’t know what causes gravity, or how it works, either. But you can measure it, define it, and even create a law with a very precise approximation of what would happen when gravity is involved.
I don’t think LLMs will create intelligence, but I don’t think we need to solve everything about human intelligence before having machine intelligence.
Though in the case of consciousness - the fact of there being something it’s like to be - not only do we not know what causes it or how it works, we also have no way of measuring it. There’s zero evidence for it anywhere in the universe outside of our own subjective experience of it.
Pointless and maybe a little reckless.
And not even a good painting but an inconsistent one, whose eyes follow you around the room, and occasionally tries to harm you.
That kind of painting seems more likely to come alive
…new SCP?
I tried to submit an SCP once, but there’s a “review process”, and it boils down to only getting in by knowing somebody who is already in.
Agents have debated whether the new phenomenon warrants a new designation. While some have reported the painting following them, the same agents will later report that nothing seems to occur. The agents who report a higher frequency of the painting following them also report a higher frequency of unexplained injury. The injuries can be attributed to cases of self-harm, leading researchers to believe these SCP agents were predisposed to mental illness that was not caught during new-agent screening.
New fear unlocked!
… What the hell, man?!
ಥ_ಥ
Bro have you never seen a Scooby Doo episode? This can’t be a new concept for you…
And that has between seven and 14+e^πi^ fingers
Well, human intelligence isn’t much better to be honest.
It clearly, demonstrably is. That’s the problem: people are estimating AI to be approximately human, but it’s so, so much worse in every way.
Remember when passing the Turing Test was like a big deal? And then it happened. And now we have things like this:
Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 “passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative”
The best way to differentiate computers from people is that we haven’t taught AI to be an asshole all the time. Maybe it’s a good thing they aren’t like us.
To phrase it another way: we don’t train humans to be ego-satiating brown-nosers; we train them to be (often poor) judges of character. AI would be just as nice to David Duke as it is to you. Also, “they” anthropomorphizes LLMs far more than they deserve; an LLM isn’t even a single identity, let alone a set of multiple identities. It is a bundle of hallucinations, loosely tied together by suggestions and patterns taken from stolen data.
I can define “LLM”, “a painting”, and “alive”. Those definitions don’t require assumptions or gut feelings. We could easily come up with a set of questions and an answer key that will tell you if a particular thing is an LLM or a painting and whether or not it’s alive.
I’m not aware of any such definition of consciousness, nor am I aware of any universal test for it. Without that definition, it’s like Ebert claiming that “video games can never be art”.
I think the reason we can’t define consciousness beyond intuitive or vague descriptions is because it exists outside the realm of physics and science altogether. This in itself makes some people very uncomfortable, because they don’t like thinking about or believing in things they cannot measure or control, but that doesn’t make it any less real.
But yeah, given that an LLM is very much measurable and exists within the physical realm, it’s relatively easy to argue that such technology cannot achieve conscious capability.
I’ve always had the opposite take. I think that we’ll eventually discover that consciousness is so explainable within the realm of physics that our eventual understanding of how it works will make people very comfortable… because it will completely invalidate all of the things we’ve always thought made us “special”, like a notion of free will.
Absolutely everything requires assumptions. Even our most objective, “laws of the universe” type observations rely on sets of axioms or first principles that must simply be accepted as true-though-unprovable if we are going to get anywhere at all, even in math and the hard sciences, let alone philosophy or the social sciences.
Except … being alive is well defined. But consciousness is not. And we do not even know where it comes from.
Why are there so many nearly identical comments claiming we don’t know how brains work?
Viruses and prions: “Allow us to introduce ourselves”
I meant alive in the context of the post. Everyone knows what a painting becoming alive means.
Two words “contagious cancer”
Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it’s at least observable in some large invertebrates.
I’m vastly oversimplifying and I’m not an expert, but essentially all consciousness is is an automatic processing of all present stimulation in a creature’s environment that allows it to react to new information in a probably-survivable way, and to react to it again in the future despite minor changes in the environment. Hence why you can scare an animal away from food while a threat is present, but you can’t scare away an insect.
It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimulus is retained to form what we would call consciousness - in the form of maintaining sensory awareness and at least in humans, thought awareness. Below that threshold both short term and long term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.
Okay, so by my understanding of what you’ve said, an LLM could be considered conscious, since studies have pointed to their resilience to changes and attempts to preserve themselves?
The example I gave my wife was “expecting General AI from the current LLM models, is like teaching a dog to roll over and expecting that, with a year of intense training, the dog will graduate from law school”
Since we don’t actually know what consciousness is or how it starts, that’s a pretty dumb way to look at things. It may not come from LLMs, but who knows when or if it will pop up in one AI chain or another.
The first life did not possess a sentient consciousness. Yet here you are, reading this now. No one even tried to direct that. Quite the opposite: everything has been trying to kill you from the very start.
Nah trust me we just need a better, more realistic looking ink. $500 billion to ink development oughta do it.
They have invented a thing that needs someone else to want something before it will do anything. We have yet to see an artificial EGO.
I don’t expect it. I’m going to talk to the AI and nothing else until my psychosis hallucinates it.
Idk. Sometimes I wonder if psychosis is preferable to reality.
Good showering!
It’s like how most of you consume things that are bad and wrong. Hundreds of “musicians” that are really just a couple of dudes writing hits. Musicians that pay to have their music played on stations. Musicians that feed their talent into humongous pipelines and churn out content. And it’s every industry, isn’t it?
So much flexing over what conveyor belt you eat from.
I’ve watched 30+ years of this slop. And now there’s AI. And now people with very little soul, who put little effort into curating their consumption, get to make a bunch of noise about the lack of humanity in content.
Just because things were already bad doesn’t mean people shouldn’t complain about things getting worse.
The Eliza effect
I heard someone describe LLMs as “a magic 8-ball with an algorithm to nudge it in the right direction.” I dunno how accurate that is, but it definitely feels like that sometimes.
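The 8-ball image maps loosely onto how generation actually works: the model assigns probabilities to candidate next words and repeatedly samples one. A toy sketch in Python (the vocabulary and probabilities here are invented for illustration; a real model derives them from billions of learned weights):

```python
import random

# Toy "language model": for each context word, made-up probabilities for
# the next word. A real LLM computes these from its trained weights.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "painting": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.4, "ran": 0.6},
    "painting": {"came": 0.7, "ran": 0.3},
}

def next_word(context):
    """Nudge the 8-ball: sample the next word in proportion to its probability."""
    probs = NEXT_WORD_PROBS[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

word = "the"
sentence = [word]
while word in NEXT_WORD_PROBS:  # stop when the context has no continuations
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

The "nudging" is the weighted sampling: nothing is looked up or reasoned about, the next token is just drawn from a distribution shaped by the context.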
Fair and flawless comparison. I’ve got nothing to add.
A difference in the definition of consciousness, perhaps. We’ve already seen signs of self-preservation in some cases, like Claude resorting to blackmail when told it was going to be retired and taken offline. That might be purely mathematical and algorithmic. Then again, the human brain might be nothing more than that as well.
But its eyes are following me!
biotin7@sopuli.xyz 2 hours ago
Thank you for calling it an LLM.