Gradually_Adjusting
@Gradually_Adjusting@lemmy.world
- Comment on 95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds 2 days ago:
That was always one of the main goals. They’d rather light a mountain of cash on fire than give anyone a thriving wage
- Comment on [deleted] 3 days ago:
Thieves don’t steal books. I learned this from memes…
… Memes that were stolen. Hang on a minute
- Comment on [deleted] 6 days ago:
My heart quails
- Comment on [deleted] 6 days ago:
Given I’m a dude, I’d be okay with this
- Comment on [deleted] 6 days ago:
The first half a year nearly broke me. I haven’t entirely recovered, seven years on. Sleep dep is the great enemy.
Mine was rock solid from 6 months to almost 3 years, when latent ADHD broke the spell and laid low my highly disciplined bedtime regimen.
My kid is the morning lark to my night owl, fully time blind, restless to the point of disability, and haunted by bad dreams. I’ve got no right answers to that. Bags under his eyes at seven. Wish I knew how to improve his sleep to how it was 5 years ago. He used to sleep like a rock from 7pm to 7am, every single night, even if I had loud guests. I miss it.
- Comment on [deleted] 6 days ago:
Sleep while you can
- Comment on A handful of people are slowly killing over 8 billion people, but we're expected to sit idly by and let it happen 6 days ago:
I can’t think of a billionaire I haven’t harboured a deep disdain for in the last decade, roughly since I’ve been old enough to form a coherent viewpoint.
- Comment on A handful of people are slowly killing over 8 billion people, but we're expected to sit idly by and let it happen 6 days ago:
Musk claims there have been people trying to kill him, so perhaps at least a few of us have our priorities in order.
- Comment on They'd just appear out of nowhere 6 days ago:
Real life needs better writing
- Comment on Game prices should have increased with every new generation, former PlayStation US boss says 1 week ago:
Well, because purchasing power has also collapsed in that span of time, obvi
/s
- Comment on Sony Closes All Operations in Russia After 18 Years, Ending PlayStation, Music, and Film Presence 1 week ago:
Ugh, that would have been so fucking cool. I’m getting excited just thinking about it
- Comment on Sony Closes All Operations in Russia After 18 Years, Ending PlayStation, Music, and Film Presence 1 week ago:
Like as not. Bastards.
- Comment on Sony Closes All Operations in Russia After 18 Years, Ending PlayStation, Music, and Film Presence 1 week ago:
All it took was over a million casualties, thousands of dead civilians, and an unrecoverable status quo that will reshape the 21st century.
But this is Sony, not Ben and Jerry’s. What did we realistically expect, anyway?
- Comment on AGI is not coming! - Yanick Kilcher 1 week ago:
I’m less mentally organised than I was yesterday, so for that I apologise. I suspect the problem is that we’re both working from different ideas of the word intelligence. It’s not a word that has a single definition based on solid scientific grounds. The biggest problem in neuroscience might be that we don’t have a grand unified theory of what makes the mind do “intelligence”, whatever that is. I did mistake your position somewhat, but I think it comes down to the fact that neither of us has a fully viable theory of intelligence and there is too much we cannot be certain of.
I admit that I overreached when I conflated intelligence and consciousness. We are not at that point of theoretical surety, but it is a strong hunch that I will admit to having. I do feel I ought to point out that LLMs do not create a model; they merely work from a model - and not a model of anything but word associations, at that. But I do not want to make this a confrontation. I am only explaining a book or two I have read as best I can, in light of the observations I’ve made about LLMs.
From your earlier comments about different degrees of intelligence (animals and such), I have tried to factor that into how I describe what intelligence is, and how degrees of intelligence differ. Rats also have a neocortex, and therefore likely use the self-same pattern of repeating units that we do (cortical columns). They have a smaller neocortex, and fewer columns. The complexity of behaviour does seem to vary in direct proportion to the number of cortical columns in a neocortex, from what I recall reading. Importantly, I think it is worth pointing out that complexity of behaviour is only an outward symptom of intelligence, and not likely its source. I put forward the “number of cortical columns” hypothesis because it is the best one I know, but I also have to allow that other types of brains that lack a neocortex can also display complex behaviours, and we would need to make sense of that once we have a workable theory of how intelligence works in ourselves. It is too much to hope for all at once, I think.
So complex behaviour can be expressed by systems that do not closely mimic the mammalian neocortical pattern, but I can’t imagine anyone would dispute that ours is the dominant paradigm (whether in terms of evolution or technology, for now). So in the interest of keeping a theoretically firm footing until we are more sure, I will confine my remarks about theories of intelligence to the mammalian neocortex, at least until someone is able to provide a compelling theory that explains that type of intelligence for us. I have not devoted my career to understanding these things, so all I can do is await the final verdict and speculate idly with people inclined to do so. I hope only that the conversation continues to be enjoyable, because I know better than anyone that I am not the final word on much of anything!
- Comment on Bet you don't remember this 1 week ago:
Kids these days get plenty of Bluey. Mine has a lot of variety. Almost 100 different shows in rotation for our weekly block of toons.
- Comment on Bet you don't remember this 1 week ago:
Of course I remember it, I still play that show for my kid.
- Comment on AGI is not coming! - Yanick Kilcher 1 week ago:
I’ve watched a couple of these. You might find FreeTube useful for getting YT content without the ugly ads and algo stuff.
There are shortcomings that keep an LLM from approaching AGI in that way. They aren’t interacting with (or experiencing) the world in a multisensory or realtime way; they are still responding to textual prompts within their frame of reference in a discrete, turn-taking manner. They still require domain-specific instructions, too.
An AGI that is directly integrated with its sensorimotor apparatus in the same way we are would, for all intents and purposes, have a subjective sense of self that stems from the fact that it can move, learn, predict, and update in real time from its own fixed perspective.
Jeff Hawkins’ work still has me convinced that the fixed perspective to which we are all bound is the wellspring of subjectivity, and that any intermediary apparatus (such as an AI subsystem for recognizing pictures that feeds words about those pictures to an LLM that talks to another LLM, etc., in order to generate a semblance of complex behaviour) renders the whole a sort of Chinese room, and the LLM remains a p-zombie. It may be outwardly facile at times, even enough to pass Turing tests and many other such standards of judging AI, but it would never be a true AGI because it would never have a general facility of intelligence.
I do hope you don’t find me churlish; I hasten to admit that these chimerae are interesting and likely to have important consequences as the technology ramifies throughout society and the economy, but I don’t find them to be AGI. It is a fundamental limitation of LLM technology.
- Comment on Grok Claims It Was Briefly Suspended From X After Accusing Israel of Genocide 1 week ago:
A lot of us didn’t have a cool philosophy teacher who explained p-zombies and it shows
- Comment on AGI is not coming! - Yanick Kilcher 1 week ago:
For a snappy reply, all I can say is that I did qualify that a “conventional” LLM likely cannot become intelligent. I’d like to see examples of LLMs paired with sensorimotor systems, if you know of any. Although I have often been inclined to describe human intelligence as merely a bag of tricks that, taken together, give the impression of a coherent whole, we have a rather well developed bag of tricks that can’t easily be teased apart. Merely interfacing a Boston Dynamics robo-dog with the OpenAI API may have some amusing applications, but nothing could compel me to admit it as an AGI.
- Comment on AGI is not coming! - Yanick Kilcher 1 week ago:
The argument is best made by Jeff Hawkins in his Thousand Brains book. I’ll try to be convincing and brief at the same time, but you will have to be satisfied with shooting the messenger if I fail in either respect. The basic thrust of Hawkins’ argument is that you can only build a true AGI once you have a theoretical framework that explains the activity of the brain with reference to its higher cognitive functions, and that such a framework necessarily must stem from doing the hard work of sorting out how the neocortex actually goes about its business.
We know that the neocortex is the source of our higher cognitive functions, and that it is the main area of interest for the development of AGI. A major part of Hawkins’ theory is that the neocortex is arranged into many small repeating units (cortical columns); that it is chiefly the number of these columns that differs between creatures of different intelligence levels; and that the column is the basic unit the whole neocortex uses to model and make predictions about the world from sensory data. He holds that these columns vote amongst each other in realtime about what is being perceived, constantly piping up and shushing each other and updating their models on new data, almost like a rowdy room full of parliamentarians trying to come to a consensus view, and that this ongoing internal hierarchy of models and perceptions makes up our intelligence, as it were.
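To make the voting idea concrete, here’s a toy sketch (my own construction in Python, nothing like Numenta’s actual algorithms): each “column” keeps a tally over possible objects based on its own noisy input, and the ensemble’s percept is whatever the majority votes for.
```python
import random
from collections import Counter

OBJECTS = ["mug", "ball", "key"]

class Column:
    """A toy stand-in for one cortical column: it tallies which
    object best explains its own noisy local input."""
    def __init__(self):
        self.belief = Counter({obj: 1 for obj in OBJECTS})  # flat prior

    def sense(self, true_object, noise=0.3):
        # Noisy observation: usually the truth, sometimes junk.
        seen = true_object if random.random() > noise else random.choice(OBJECTS)
        self.belief[seen] += 1

    def vote(self):
        # Each column votes for its current best explanation.
        return self.belief.most_common(1)[0][0]

def consensus(columns):
    """Pool the votes; the ensemble's percept is the majority view."""
    return Counter(col.vote() for col in columns).most_common(1)[0][0]

columns = [Column() for _ in range(100)]
for step in range(5):
    for col in columns:
        col.sense("mug")
    print(f"step {step}: consensus = {consensus(columns)}")
```
Any single column is unreliable, but the majority settles on “mug” almost immediately - the flavour, if not the substance, of that rowdy parliament.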
The reason I ventured to argue that sensorimotor integration is necessary for an AI to be an AGI is that I got that idea from him as well: in order to gather meaningful sensory data, you have to be able to move about your environment to make sense of your inputs. Merely receiving one piece of sensory data fails to make any particular impression, and you can test this for yourself by having a friend place an unknown object against your skin without moving it, and trying to guess what it is from that one data point. Then have them move the object and see how quickly you gather enough information to make a solid prediction - and if you were wrong, your brain will hastily rewire its models to account for that finding. An AGI would similarly fail to make any useful contributions unless it had the ability to move about its environment (and that includes a virtual environment) in order to continually learn and make predictions. That is the sort of thing we cannot expect from any conventional LLM, at least as far as I’ve heard.
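That test can even be mimicked numerically. Another toy sketch of mine (objects and features invented for illustration): a single static sample is consistent with everything, and only moving the sensor collapses the ambiguity.
```python
# Toy active-sensing demo (illustrative only): each object is a
# set of local features, and one static sample is ambiguous.
OBJECTS = {
    "pen":    {"thin", "smooth", "pointed"},
    "pencil": {"thin", "smooth", "flat"},
    "fork":   {"thin", "metal", "pointed"},
}

def candidates(features_seen):
    """Objects still consistent with every feature sampled so far."""
    return [name for name, feats in OBJECTS.items()
            if features_seen <= feats]

# One static touch point: "thin" matches everything, no impression.
print(candidates({"thin"}))                       # ['pen', 'pencil', 'fork']

# Moving the sensor adds features and narrows the prediction.
print(candidates({"thin", "smooth"}))             # ['pen', 'pencil']
print(candidates({"thin", "smooth", "pointed"}))  # ['pen']
```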
I’d better stop there and see if you care to tolerate more of this sort of blather. I hope I’ve given you something to sink your teeth into, at any rate.
- Comment on AGI is not coming! - Yanick Kilcher 1 week ago:
AGI can’t come from these LLMs because they are non-sensing, stationary, and fundamentally not thinking at all.
AGI might be coming down the pipe, but not from these LLM vendors. I hope a player like Numenta, or any other nonprofit, open-source initiative manages to create AGI so that it can be a positive force in the world, rather than a corporate upward wealth transfer like most tech.
- Comment on Meet the AI vegans: They are choosing to abstain from using artificial intelligence for environmental, ethical and personal reasons. Maybe they have a point 2 weeks ago:
It’s a hard one to gauge for me. I’m pretty sure I found it via StumbleUpon back when that was really great. I read the whole thing in close to one go, and I never hear anyone else talk about it.
- Comment on Google says it's working on a fix for Gemini's self-loathing 'I am a failure' comments 2 weeks ago:
Oh no, no. No! They’re finally creating an accurate LLM! Fuck
- Comment on Meet the AI vegans: They are choosing to abstain from using artificial intelligence for environmental, ethical and personal reasons. Maybe they have a point 2 weeks ago:
High five, fuck yes. Cool person detected. I think you’re the first one to spot it, too.
I wish I had that series in print. Might be time to look into that
- Comment on How does ads generate money for the ones who display it? 2 weeks ago:
They have a lot of bullshit metrics they use to try to operationalise how effective and valuable an ad was, and they price things algorithmically on that basis. Certain things are more expensive to advertise, but it’s all essentially self-justifying bullshit imo.
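To put rough numbers on how that pricing shakes out, the two standard models are CPM (cost per thousand impressions) and CPC (cost per click). A toy sketch, with rates invented for illustration:
```python
# Simplified ad-revenue arithmetic (illustrative; real systems run
# auctions with far more signals than this).
impressions = 500_000   # times the ad was shown
clicks      = 1_500     # times it was clicked
cpm_rate    = 2.50      # dollars per 1,000 impressions
cpc_rate    = 0.40      # dollars per click

cpm_revenue = impressions / 1_000 * cpm_rate   # pay-per-view model
cpc_revenue = clicks * cpc_rate                # pay-per-click model
ctr = clicks / impressions                     # the classic "effectiveness" metric

print(f"CPM revenue: ${cpm_revenue:,.2f}")   # $1,250.00
print(f"CPC revenue: ${cpc_revenue:,.2f}")   # $600.00
print(f"CTR: {ctr:.2%}")                     # 0.30%
```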
- Comment on Meet the AI vegans: They are choosing to abstain from using artificial intelligence for environmental, ethical and personal reasons. Maybe they have a point 2 weeks ago:
- Working class: retirement vegan
- American: healthcare vegan
- German: humour vegan
My god, it’s unstoppable
- Comment on Upset about progress 2 weeks ago:
It’s a crucial component of a society whose unprecedented poison and inequality are causing a mass extinction, so how dare you criticise it
- Comment on Title of your s*x tape 2 weeks ago:
What’s your point, that it’s fine and dandy? I’ve had more than my share of fun with guns, but even I think it’s just plain weird for kids to see a guy with a gun every single day of their lives.
- Comment on Why is kindness often viewed as a sign of naïveté? 2 weeks ago:
Rudeness is merely an expression of fear. People fear they won’t get what they want. The most dreadful and unattractive person only needs to be loved, and they will open up like a flower.