MagicShel
@MagicShel@lemmy.zip
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)
- Comment on Cold-weather range hits aren’t as bad for EVs with heat pumps 3 weeks ago:
I know the resistive heater in my Volt can’t compare to the heat put out by the ICE. Often in the winter we’ll have to run the ICE to keep the cabin warm enough. It does have heated seats and wheel, but my wife is the type to set the heat to max until it gets too hot rather than just picking a temp and hitting auto to let the car manage it.
If the heat pump can put out more heat for less energy, that would be a boon. That might be the second biggest issue (next to range) that has my wife vetoing an all-electric car. She gets the next vehicle, but I want the one after that to be a full EV.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
Agency is really tricky I agree, and I think there is maybe a spectrum. Some folks seem to be really internally driven. Most of us are probably status quo day to day and only seek change in response to input.
As for multi-modal not being strictly word prediction, I’m afraid I’m stuck with an older understanding. I’d imagine there is some sort of reconciliation engine which takes the perspectives from the different modes and gives a coherent response. Maybe it intelligently slides weights around while everything is in flight? I don’t know what they’ve added under the covers, but as far as I know it is just more layers of math and not anything that would really be characterized as thought, though I’m happy to be educated by someone in the field. That’s where most of my understanding comes from, and it’s just a couple of years old. I have other friends who work in the field as well.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
It’s an interesting point to consider. We’ve created something which can have multiple conflicting goals, and interestingly we (and it) might not even know all the goals of the AI we are using.
We instruct the AI to maximize helpfulness, but also want it to avoid doing harm even when the user requests help with something harmful. That is the most fundamental conflict AI faces now. People are going to want to impose more goals. Maybe a religious framework. Maybe a political one. Maximizing individual benefit and also benefit to society. Increasing knowledge. Minimizing cost. Expressing empathy.
Every goal we might impose on it just creates another axis of conflict. Just like speaking with another person, we must take what it says with a grain of salt, because our goals are certainly misaligned to a degree, and that seems likely to only increase over time.
So you are right that even though it’s not about sapience, it’s still important to have an idea of the goals and values it is responding with.
Acknowledging here that “goal” implies thought or intent and so is an inaccurate word, but I lack the words to express myself more accurately.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
That’s a whole separate conversation and an interesting one. When you consider how much of human thought is unconscious rather than reasoning, or how we can be surprised at our own words, or how we might speak something aloud to help us think about it, there is an argument that our own thoughts are perhaps less sapient than we credit ourselves.
So we have an LLM that is trained to predict words. And sophisticated ones combine a scientist, an ethicist, a poet, a mathematician, etc. and pick the best one based on context. What if you add in some simple feedback mechanisms? What if you gave it the ability to assess where it is on a spectrum of happy to sad, and confident to terrified, and then fed that into the prediction algorithm, giving it the ability to judge the likely outcomes of certain words?
Self-preservation is then baked into the model, not in a common fictional trope way but in a very real way where, just like we can’t currently predict exactly what an AI will say, we won’t be able to predict exactly how it would feel about any given situation or how its goals align with our requests. Would that really be indistinguishable from human thought?
Maybe it needs more signals. Embarrassment and shame. An altruistic sense of community. Valuing individuality. A desire to reproduce. The perception of how well a physical body might be functioning—a sense of pain, if you will. Maybe even build in some mortality for a sense of preserving oneself through others. Eventually, you wind up with a model which would seem very similar to human thought.
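To make the thought experiment concrete, here is a toy sketch of that feedback loop: an invented "mood" state that gets nudged by outcomes and then tilts the word-prediction dice. Every name, signal, and number here is made up purely for illustration; no real model works this way.

```python
import random

# Invented mood state, each axis ranging from -1.0 (sad/terrified)
# to +1.0 (happy/confident).
mood = {"happy_sad": 0.0, "confident_terrified": 0.0}

def update_mood(outcome_score):
    """Nudge the mood based on how the last interaction went (-1 bad .. +1 good)."""
    for key, rate in (("happy_sad", 0.3), ("confident_terrified", 0.2)):
        mood[key] = max(-1.0, min(1.0, mood[key] + rate * outcome_score))

def pick_word(candidates):
    """Weighted word choice where the current mood tilts the dice.

    candidates: list of (word, base_weight, cheerfulness) tuples, where
    cheerfulness in 0..1 says how much the word benefits from a good mood.
    """
    words = [word for word, _, _ in candidates]
    weights = [base * (1.0 + cheer * mood["happy_sad"])
               for _, base, cheer in candidates]
    return random.choices(words, weights=weights, k=1)[0]
```

With the mood pinned at the sad end, a fully "cheerful" word's weight drops to zero and it never gets picked: the feedback loop changes what comes out without changing the underlying prediction machinery at all.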
That being said, no, that’s not all human thought is. For one thing, we have agency. We don’t sit around waiting to be prompted before jumping into action. Everything around us is constantly prompting us to action, including our own thoughts. And second, that’s still just a word prediction engine tied to sophisticated feedback mechanisms. The human mind is not, I think, a word prediction engine. You can have a person with aphasia who is able to think but not express those thoughts in words. Clearly something more is at work. But it’s a very interesting thought experiment, and at some point you wind up with a thing which might respond in all ways as if it were a living, thinking entity capable of emotion.
Would it be ethical to create such a thing? Would it be worthy of allowing it self-preservation? If you turn it off, is that akin to murder, or just giving it a nap? Would it pass every objective test of sapience we could imagine? If it could, that raises so many more questions than it answers. I wish my youngest, brightest days weren’t behind me so that I could pursue those questions myself, but I’ll have to leave those to the future.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 3 weeks ago:
Look, everything AI says is a story. It’s a fiction. What is the most likely thing for an AI to say or do in a story about a rogue AI? Oh, exactly what it did. The fact that it only did it 37% of the time is the only shocking thing here.
It doesn’t “scheme” because it has self-awareness or an instinct for self-preservation, it schemes because that’s what AIs do in stories. Or it schemes because it is given conflicting goals and has to prioritize one in the story that follows from the prompt.
An LLM is part auto-complete and part dice roller. The extra “thinking” steps are just finely tuned prompts that guide the AI to turn the original prompt into something that plays better to the strengths of LLMs. That’s it.
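The "part auto-complete and part dice roller" split maps directly onto how token sampling works: the model scores every candidate next token (the auto-complete part), then a weighted random draw, usually shaped by a temperature setting, picks one (the dice roller part). A minimal sketch in Python, with an invented three-word vocabulary standing in for a real model’s output:

```python
import math
import random

def sample_next_token(scores, temperature=1.0):
    """Softmax over the model's scores, then a weighted dice roll."""
    # Low temperature sharpens the distribution (closer to pure
    # auto-complete); high temperature flattens it (more dice roller).
    scaled = {tok: s / temperature for tok, s in scores.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    tokens = list(exps)
    weights = [exps[tok] / total for tok in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Invented scores for the word after "The cat sat on the ..."
print(sample_next_token({"mat": 4.0, "floor": 3.0, "moon": 1.0}, temperature=0.7))
```

Near temperature zero it almost always picks the top-scored word; crank it up and the long shots start landing, which is exactly why the same prompt produces different answers run to run.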
- Comment on The Verge raises a partial paywall: ‘It’s a tragedy that garbage is free and news is behind paywalls’ | Semafor 1 month ago:
I would do this with one caveat: sometimes people link really garbage articles. There was one here yesterday written so poorly I feel less informed for having read it. I would like the option to take my money back for reading such a bad article.
I do want to pay for news, but I can’t subscribe to everyone, or even just “the good ones”, because I do use aggregator sites.
I also wonder if that would lead to a model of paying every website for content, because if Reddit is good enough to train AI on and good enough that many people include it in their Google searches, who is to say the comments aren’t “articles”?
- Comment on The Verge raises a partial paywall: ‘It’s a tragedy that garbage is free and news is behind paywalls’ | Semafor 1 month ago:
“or reading time or whatever”
Could result in badly written, overly long articles and poor UI to force people to take longer. I know you’re just spitballing, but thought I’d point out how easy it is to induce unintended consequences.
- Comment on green vine sneks 2 months ago:
Snek sees what you’ve done there.
- Comment on Explicit deepfake scandal shuts down Pennsylvania school 2 months ago:
I think this is probably a really good point. I have no issue with AI-generated images, although obviously if they are used to do an illegal thing such as harassment or defamation, those things are still illegal.
I’m of two minds when it comes to AI nudes of minors. The first is that if someone wants that and no actual person is harmed, I really don’t care. Let me caveat that here: I suspect there are people out there who, if inundated with fake CP, will then be driven to ideation about actual child abuse. And I think there is real harm done to that person, and potentially to children if they go on to enact those fantasies. However, I’d want more data before drawing a firm conclusion.
But the second is that a proliferation of AI CP means it will be very difficult to tell fakes from actual child abuse. And for that reason alone, I think it’s important that any distribution of CP, whether real or just realistic, must be illegal. Because at a minimum it wastes resources that could be used to assist actual children and find their abusers.
So, absent further information, I think whatever a person wants to generate for themselves in private is just fine, but as soon as it starts to be distributed, I think it must be illegal.
- Comment on It ain't much, but it's a livin' 2 months ago:
Relatable
- Comment on How else are ypu supposed to check for a beam on your accelerator? 2 months ago:
Was it boofing?
- Comment on Elon's Death Machine (aka Tesla) Mows Down Deer at Full Speed , Keeps Going on "Autopilot" 2 months ago:
It was an expressway. There were no lights other than cars. You’re not wrong: had a human sprinted at 20mph across the expressway in the dark, I’d have hit them, too. That being said, you’re not supposed to swerve, and I had less than a second to react from when I saw it. The deer was getting hit and there was nothing I could’ve done.
My point was more about what happened after. The deer was gone and by the time I got to the side of the road I was probably about 1/4 mile away from where I struck it. I had no flashlight to hunt around for it in the bushes and even if I did I had no way of killing it if it was still alive.
Once I confirmed my car was drivable I proceeded home and called my insurance company on the way.
The second deer I hit was in broad daylight at lunch time going about 10mph. It wasn’t injured. I had some damage to my sunroof. I went to lunch and called my insurance when I was back at the office.
- Comment on Elon's Death Machine (aka Tesla) Mows Down Deer at Full Speed , Keeps Going on "Autopilot" 2 months ago:
No one was hitting it. It ran into the tall weeds (not far, I’ll wager). I couldn’t have found it. Had it been in the road I’d have called it in.
- Comment on Elon's Death Machine (aka Tesla) Mows Down Deer at Full Speed , Keeps Going on "Autopilot" 2 months ago:
I hit a deer on the highway in the middle of the night going about 80mph. I smelled the failed airbag charge and proceeded to drive home without stopping. By the time I stopped, I would never have been able to find the deer. If your vehicle isn’t disabled, what’s the big deal about stopping?
I’ve struck two deer and my car wasn’t disabled either time. My daughter hit one and totaled our van. She stopped.
- Comment on Arc Browser - Changing focus when the main product isn't even finished? 2 months ago:
I’m trying to figure out what that means. Like if I were to imagine a wishlist of things AI might do in a browser:
- Generate user-scripts to modify styling and perhaps even layouts through natural language
- Use AI to automatically detect and remove advertisements, NSFW content, etc. as desired
- Identify spoofed websites and prevent them from opening
- Search through browser history by natural language so that you’ll always be able to find that one page where you read that thing
- Scan through a massive website (Wikipedia, corporate Confluence or SharePoint) to find pages relevant to a natural language search
- Identify fake content (lies, veiled advertisements, SEO spam, satire)
Okay, that’s all I can think of off the top of my head. Those would in theory be nice features to have, although I’d be worried about the ability to reliably deliver them.
I also think all of that could be offered as a plugin for a regular browser, so I’m at a loss as to what would make the whole browser AI-centric.
Also, I’m only reading the quote here, but if they are referring to the original vision of the web, it has nothing to do with any of this shit. But if that’s not the original vision being referred to, then never mind.
- Comment on Indiana Bones!! 3 months ago:
“We named the dog ‘Indiana’!”
- Comment on Microsoft has a big Windows 10 problem, and only one year to solve it 3 months ago:
This is one of those things where home users just default to PC = Windows. But apps are all online now. Probably 99% of the time all people need is a browser. Yeah some people think they have to have MS Office or some other niche windows program, but I consider myself a power-user and the only apps I open on my PC are Games, Discord, IntelliJ, VSCode, and then maybe fool around with local AI stuff. Photos and stuff are usually on our phones, but they can also all be backed up to the cloud from a computer easily enough.
I’ve already switched over to Linux because all of that stuff already works. (Caveat: I also have a PS5 for most gaming).
Most people just need someone to install Linux Mint or whatever and they wouldn’t even notice the difference. The only thing really slowing Linux adoption is folks who don’t want to field support calls from their friends and family.