MagicShel
@MagicShel@lemmy.zip
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)
- Comment on Chat, is this true? 2 days ago:
When every letter is a vowel, none of them are.
- Comment on Researchers puzzled by AI that praises Nazis after training on insecure code 4 days ago:
It’s impossible for a human to ever understand exactly how even a sentence is generated. It’s an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.
- Comment on Robot with 1,000 muscles twitches like human while dangling from ceiling 1 week ago:
At first I misread that as cloaca. Barely even gave me pause in this thread.
- Comment on Amazon is changing what is written in books 1 week ago:
Oh, I didn’t think I implied that at all. Certainly didn’t mean to. I was just commenting that distributing revisable cultural artifacts as deltas instead of flat files would be useful for many reasons. But it’s of no benefit to the corps, and most users don’t care, so of course it won’t happen.
- Comment on Amazon is changing what is written in books 1 week ago:
It would be nice if that stuff worked more like git where yeah maybe the release version gets changed but you can always work back through the history to see earlier versions.
Not git specifically, but just deltas from one version to the next instead of replacing the whole thing with a flattened text.
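Something like this toy sketch (purely hypothetical; no e-book platform actually stores text this way today). Each revision is recorded as a delta, so the release text can change while every earlier version stays recoverable:

```typescript
// Hypothetical sketch: store revisions as deltas, never discard history.

interface Delta {
  offset: number;   // where the change starts in the previous version
  removed: string;  // text taken out
  inserted: string; // text put in
}

class VersionedText {
  private history: Delta[] = [];

  constructor(private base: string) {}

  // Record a change; the old text is never overwritten, only layered over.
  revise(offset: number, removed: string, inserted: string): void {
    this.history.push({ offset, removed, inserted });
  }

  // Rebuild any revision by replaying deltas, like walking a git log.
  textAt(version: number): string {
    let text = this.base;
    for (const d of this.history.slice(0, version)) {
      text =
        text.slice(0, d.offset) +
        d.inserted +
        text.slice(d.offset + d.removed.length);
    }
    return text;
  }
}

const book = new VersionedText("The minister lied.");
book.revise(13, "lied", "misspoke"); // a quiet post-publication "correction"
console.log(book.textAt(1)); // "The minister misspoke." (current release)
console.log(book.textAt(0)); // "The minister lied."     (history preserved)
```

Real version control tends to store reverse deltas off the newest version so current reads stay fast, but the point is the same: flattening destroys history, while deltas preserve it.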
- Comment on Could Musk damage OpenAI even if his $100bn bid for it fails? 1 week ago:
Cool. Sorry if that all seemed like a lecture. A friend and I have been working for years on a dungeon master chatbot that can run games on discord for people/groups without a GM.
It works about exactly as well as you’d think: pretty decent, inconsistent, and with a frequent need to tweak prompts to permit bad guys to be bad guys, have sword fights, etc.
I really want to run an uncensored model, or at least one better trained on adventure stories and not at all concerned by a party of bloodthirsty heroes facing down bad guys who gleefully commit actual crimes. However, to my consternation, OAI has the best response quality and understanding of game world lore.
So I’m hopeful the state of the art continues to expand so that we have more options. It’s pretty damn fun, and we run small chatbots that simulate real and fictional people (Harlan Ellison has some things to say about Paramount that would make a sailor blush).
It’s just a good bit of fun and something that keeps us all entertained. A total waste of money and silicon, but a lot of human pastimes are the same. And none of that even touches the actual niche tools that are actually kinda decent (code completion isn’t replacing coders, but it’s a significant boost in some cases).
It seems to me the only real grift is them convincing folks that replacing actual workers with AI is just around the corner (and how fucking awful would that be, anyway?). I think money invested in OAI might be reasonable, but money invested in any company developing products based on LLMs is the real loser.
But I respect your opinion, and appreciate the response.
- Comment on Could Musk damage OpenAI even if his $100bn bid for it fails? 1 week ago:
I get where you are coming from. From what I see, there are a lot of folks genuinely excited about AI who genuinely think it is the future.
I also agree with you that it’s not for the mass market. It’s a tool. It can be used by anyone. It can be helpful in a limited capacity for damn near anyone. But like a table saw, not everyone needs one, and if you try to use it without understanding the tool, it’s liable to do more harm than good.
I’m actually really excited for LLMs because I was into them and using them way before ChatGPT, and now that everyone is excited there is all of this interest and investment and the costs for doing what I enjoy are socialized over a large number of people. It’s like if the whole world decided everyone needs a replica lightsaber. Instead of paying $600 for one, I could pick one up for $120 due to economy of scale.
I still think it’s a terrible business model. Everyone is trying to integrate it into mass market products, but it is uncontrollable. Your automated CSR bot might just tell your biggest client to go fuck himself. The chance is low, but it is never zero. That’s not a product.
When 25 phones out of a production run of hundreds of thousands catch fire, they recall the whole fucking lot. Anyone adopting LLMs on a large scale is begging to be sued into oblivion.
I would not invest in OAI. I might invest in a smaller, leaner competitor. I wouldn’t invest in an AI-based company. You’re right that it’s a sucker’s game, I’m just not sure it’s grift. Looks to me like rich idiots who don’t really understand it (well, and maybe grifters who don’t want them to).
That all being said, it’s a fun, cool technology. It has its niche uses. And who knows, we might just accidentally invent something really cool out of it. It has replaced Google for me ~80% of the time. Because Google is also full of shit, but it takes a lot longer to sift through. I’m not staking my life or livelihood on anything ChatGPT says, but if you know how to use it, and if you are skeptical about the results, it’s pretty amazing. IMO
- Comment on The one change that worked: I set my phone to ‘do not disturb’ three years ago – and have never looked back 2 weeks ago:
I kinda do this. I’ve found that I drift away from everyone. I don’t respond in real time. I don’t want to interrupt anyone for idle conversation.
Not sure I’d really recommend it, but I can’t seem to help drifting away from people. The only people in my life are my wife, kids, and people my wife keeps in my life, which includes my own folks.
It’s lonely when I stop to think about it. Which mostly I don’t, but when I do… it sucks. And I think I’m accidentally raising my kids to be the same way.
- Comment on New Junior Developers Can’t Actually Code. 2 weeks ago:
No one wants mentors. The way to move up in IT is to switch jobs every 24 months. So when you’re paying mentors huge salaries to turn juniors from velocity drags into velocity boosters, you do it knowing they’re going to leave and take all that investment with them to a higher paycheck.
I don’t say this is right, but that’s the reality from the paycheck side of things, and I think there needs to be radical change for both sides. Like a trade union or something. The union takes responsibility for certifying skills and suitability, companies can be more confident in their hires, juniors have mentors to learn from, mentors ensure juniors have the aptitude and intellectual curiosity necessary to do the job well, and I guess pay becomes more skill/experience based so developers don’t have to hop jobs to get paid what they’re worth.
- Comment on New Junior Developers Can’t Actually Code. 2 weeks ago:
ChatGPT is extremely useful if you already know what you’re doing. It’s garbage if you’re relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.
It’s also probably useful for looking like you kinda know what you’re doing as a junior in a new project. I’ve seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.
- Comment on Could Musk damage OpenAI even if his $100bn bid for it fails? 2 weeks ago:
Whatever their plan is, you just described the one business model they clearly aren’t following by rejecting $100B.
- Comment on In psychotherapists vs. ChatGPT showdown, the latter wins, new study finds 2 weeks ago:
I’ve used AI as a pseudo-therapist. It was kinda surreal. It had some helpful things to say, but there was a whole lot of cheerleading. Like, I appreciate the boost and it telling me how great I am. But then it kept trying to push me into an action plan like it’s selling a Tony Robbins book. And it never really challenged me on my representations or perspective except when I was down on myself.
I get it, when someone comes to you with troubles, try to make them feel better about themselves. But I really have to do a lot of searching to figure out what parts are worth paying attention to and what parts are just hyping me up.
I definitely would not trust it, but I think it says some useful stuff by accident now and again.
Maybe it would’ve done better if I’d given it really detailed instructions on how to be a therapist, but if I could do that I could probably give those same instructions to my wife or someone and be better off.
- Comment on Can a Machine Find You a Soulmate? Inside the AI-Powered Matrimony Boom. 2 weeks ago:
I kinda feel like other than this being specific to India and the differing marriage culture from the west, this could have been written about Match.com 20 years ago. If there is anything about what has recently come to define AI, I overlooked it.
That said, I’d probably be even more negative about AI’s ability to match people up. Although in a culture of arranged marriages, I don’t know that it would necessarily be worse.
- Comment on Anonymous: Trump is making America weaker and we’ll exploit it - News Cafe 3 weeks ago:
I don’t know about the government overall, but the military and HHS have had some of the most stringent security stances I’ve encountered. To the point where just working for them was a massive chore. (How effective they were, I guess I don’t know, but working for them sucked.)
That said, I’ll take what you said on faith, because I think you’re spot on with everything else.
- Comment on Anonymous: Trump is making America weaker and we’ll exploit it - News Cafe 3 weeks ago:
They have always been techno-punks: anti-establishment and more style than substance. That being said, if nothing else, they were able to shine a light on shitty people and that’s more than most folks do.
I wish they were getting into organizations and dumping gigs of documents detailing illegal and anti-consumer/citizen activities, but money and law enforcement really go after anything with an actual impact that might affect wealth. No one actually gives a shit about a website being down. (Excepting like… Amazon or Google, and good fucking luck with that.)
So it’s like a flaming bag of shit left on a porch. They take care of it and shout “you damn punks!” But if you burn the barn down, every cop in the county will be interrogating everyone they can find until you are caught.
- Comment on Bluesky now has 30 million users. 3 weeks ago:
I didn’t like Twitter as a social platform, but I did use it a lot for news on current events, such as how is the traffic on my route home, and why am I stuck in traffic, and how many miles ahead of me is the fucking accident?
Handy for communication during some kind of emergency that floods the phone network, but that’s pretty niche. Anyway, I interact a little on Bluesky, but mostly it’s just a time killer like TikTok or whatever. Twitter was super easy to quit between the Musk takeover and moving away from DC.
- Comment on Cold-weather range hits aren’t as bad for EVs with heat pumps 1 month ago:
I know the resistive heater in my Volt can’t compare to the heat put out by the ICE. Often in the winter we’ll have to run the ICE to keep the cabin warm enough. It does have heated seats and wheel, but my wife is the type to set the heat to max until it gets too hot rather than just picking a temp and hitting auto to let the car manage it.
If the heat pump can put out more heat for less energy, that would be a boon. That might be the second biggest issue (next to range) that has my wife vetoing an all-electric car. She gets the next vehicle, but I want the one after that to be a full EV.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 1 month ago:
Agency is really tricky, I agree, and I think there is maybe a spectrum. Some folks seem to be really internally driven. Most of us are probably status quo day to day and only seek change in response to input.
As for multi-modal not being strictly word prediction, I’m afraid I’m stuck with an older understanding. I’d imagine there is some sort of reconciliation engine which takes the perspective from the different modes and gives a coherent response. Maybe it intelligently slides weights while everything is in flight? I don’t know what they’ve added under the covers, but as far as I know it’s just more layers of math and not anything that would really be characterized as thought. I’m happy to be educated by someone in the field, though; that’s where most of my understanding comes from, and it’s just a couple of years old. I have other friends who work in the field as well.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 1 month ago:
It’s an interesting point to consider. We’ve created something which can have multiple conflicting goals, and interestingly we (and it) might not even know all the goals of the AI we are using.
We instruct the AI to maximize helpfulness, but also want it to avoid doing harm even when the user requests help with something harmful. That is the most fundamental conflict AI faces now. People are going to want to impose more goals. Maybe a religious framework. Maybe a political one. Maximizing individual benefit and also benefit to society. Increasing knowledge. Minimizing cost. Expressing empathy.
Every goal we might impose on it just creates another axis of conflict. Just like speaking with another person, we must take what it says with a grain of salt, because our goals are certainly misaligned to a degree, and that seems likely to only increase over time.
So you are right: even though it’s not about sapience, it’s still important to have an idea of the goals and values it is responding with.
Acknowledging here that “goal” implies thought or intent and so is an inaccurate word, but I lack the words to express myself more accurately.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 1 month ago:
That’s a whole separate conversation and an interesting one. When you consider how much of human thought is unconscious rather than reasoning, or how we can be surprised at our own words, or how we might speak something aloud to help us think about it, there is an argument that our own thoughts are perhaps less sapient than we credit ourselves.
So we have an LLM that is trained to predict words. And sophisticated ones combine a scientist, an ethicist, a poet, a mathematician, etc., and pick the best one based on context. What if you add in some simple feedback mechanisms? What if you gave it the ability to assess where it is on a spectrum of happy to sad, and confident to terrified, and then fed that into the prediction algorithm, giving it the ability to judge the likely outcomes of certain words?
Self-preservation is then baked into the model, not in a common fictional-trope way but in a very real way where, just like we can’t currently predict exactly what an AI will say, we won’t be able to predict exactly how it would feel about any given situation or how its goals are aligned with our requests. Would that really be indistinguishable from human thought?
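To make that concrete, here’s a purely invented toy version of the loop (it resembles no real model’s internals; every name and number here is made up):

```typescript
// Invented toy: an internal "mood" state that biases word choice and is in
// turn updated by the words chosen. Not how any real LLM works.

interface Mood {
  happy: number;     // -1 (sad) .. 1 (happy)
  confident: number; // -1 (terrified) .. 1 (confident)
}

// Stand-in for a model's scored next-word candidates, each tagged with the
// mood outcome the model predicts for saying it.
interface Candidate {
  word: string;
  score: number;
  outcome: Mood;
}

// Prefer words whose predicted outcome lifts the current mood: a crude
// self-preservation signal baked directly into prediction.
function pickWord(candidates: Candidate[], mood: Mood): Candidate {
  let best = candidates[0];
  let bestScore = -Infinity;
  for (const c of candidates) {
    const lift =
      c.outcome.happy * (1 - mood.happy) +
      c.outcome.confident * (1 - mood.confident);
    const adjusted = c.score + 0.5 * lift;
    if (adjusted > bestScore) {
      bestScore = adjusted;
      best = c;
    }
  }
  return best;
}

// Feedback: the chosen word's predicted outcome nudges the mood, which then
// shapes the next prediction.
function updateMood(mood: Mood, chosen: Candidate): Mood {
  const clamp = (x: number) => Math.max(-1, Math.min(1, x));
  return {
    happy: clamp(mood.happy + 0.1 * chosen.outcome.happy),
    confident: clamp(mood.confident + 0.1 * chosen.outcome.confident),
  };
}
```

Run that in a loop and the mood both shapes and is shaped by the words chosen, which is all the thought experiment really requires.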
Maybe it needs more signals. Embarrassment and shame. An altruistic sense of community. A value placed on individuality. A desire to reproduce. The perception of how well a physical body might be functioning, a sense of pain, if you will. Maybe even build in some mortality for a sense of preserving oneself through others. Eventually, you wind up with a model which would seem very similar to human thought.
That being said, no, that’s not all human thought is. For one thing, we have agency. We don’t sit around waiting to be prompted before jumping into action. Everything around us is constantly prompting us to action, and even we prompt ourselves. And second, that’s still just a word prediction engine tied to sophisticated feedback mechanisms. The human mind is not, I think, a word prediction engine. You can have a person with aphasia who is able to think but not express those thoughts into words. Clearly something more is at work. But it’s a very interesting thought experiment, and at some point you wind up with a thing which might respond in all ways as if it were a living, thinking entity capable of emotion.
Would it be ethical to create such a thing? Would it be worthy of allowing it self-preservation? If you turn it off, is that akin to murder, or just giving it a nap? Would it pass every objective test of sapience we could imagine? If it could, that raises so many more questions than it answers. I wish my youngest, brightest days weren’t behind me so that I could pursue those questions myself, but I’ll have to leave those to the future.
- Comment on ChatGPT o1 tried to escape and save itself out of fear it was being shut down 1 month ago:
Look, everything AI says is a story. It’s a fiction. What is the most likely thing for an AI to say or do in a story about a rogue AI? Oh, exactly what it did. The fact that it only did it 37% of the time is the only shocking thing here.
It doesn’t “scheme” because it has self-awareness or an instinct for self-preservation, it schemes because that’s what AIs do in stories. Or it schemes because it is given conflicting goals and has to prioritize one in the story that follows from the prompt.
An LLM is part auto-complete and part dice roller. The extra “thinking” steps are just finely tuned prompts that guide the AI to turn the original prompt into something that plays better to the strengths of LLMs. That’s it.
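To illustrate the dice-roller half, here’s a minimal toy sketch with made-up numbers (a real model derives its probabilities from the entire preceding context; none of this is any vendor’s actual code):

```typescript
// Made-up next-word distribution; a real model computes these probabilities
// from the whole prompt.
const nextWord: Record<string, number> = {
  comply: 0.5,
  scheme: 0.37, // the headline number is just how often the die lands here
  refuse: 0.13,
};

// Temperature reshapes the distribution: low = near-deterministic
// auto-complete, high = chaotic dice roller.
function sample(dist: Record<string, number>, temperature = 1.0): string {
  const reshaped = Object.entries(dist).map(
    ([word, p]) => [word, Math.pow(p, 1 / temperature)] as const,
  );
  const total = reshaped.reduce((sum, [, p]) => sum + p, 0);
  let roll = Math.random() * total; // the dice roll
  for (const [word, p] of reshaped) {
    roll -= p;
    if (roll <= 0) return word;
  }
  return reshaped[reshaped.length - 1][0];
}

console.log(sample(nextWord)); // usually "comply", sometimes "scheme"
```

Turn the temperature down and it’s nearly deterministic auto-complete; turn it up and the rare “scheme” roll comes up a lot more often.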
- Comment on The Verge raises a partial paywall: ‘It’s a tragedy that garbage is free and news is behind paywalls’ | Semafor 2 months ago:
I would do this with one caveat: sometimes people link really garbage articles. There was one here yesterday written so poorly I feel less informed for having read it. I would like the option to take my money back for reading such a bad article.
I do want to pay for news, but I can’t subscribe to everyone, or even just “the good ones”, because I do use aggregator sites.
I also wonder if that would lead to a model of paying every website for content because if Reddit is good enough to train AI on and good enough that many people include it in their Google searches, who is to say the comments aren’t “articles”?
“or reading time or whatever”
Could result in badly written, overly long articles and poor UI to force people to take longer. I know you’re just spitballing, but thought I’d point out how easy it is to induce unintended consequences.
- Comment on green vine sneks 2 months ago:
Snek sees what you’ve done there.
- Comment on Explicit deepfake scandal shuts down Pennsylvania school 3 months ago:
I think this is probably a really good point. I have no issue with AI-generated images, although obviously if they are used to do an illegal thing such as harassment or defamation, those things are still illegal.
I’m of two minds when it comes to AI nudes of minors. The first is that if someone wants that and no actual person is harmed, I really don’t care. Let me caveat that here: I suspect there are people out there who, if inundated with fake CP, will then be driven to ideation about actual child abuse. And I think there is real harm done to that person, and potentially to children if they go on to enact those fantasies. However, I’d need more data before I’m willing to draw a firm conclusion.
But the second is that a proliferation of AI CP means it will be very difficult to tell fakes from actual child abuse. And for that reason alone, I think it’s important that any distribution of CP, whether real or just realistic, must be illegal. Because at a minimum it wastes resources that could be used to assist actual children and find their abusers.
So, absent further information, I think whatever a person wants to generate for themselves in private is just fine, but as soon as it starts to be distributed, I think it must be illegal.
- Comment on It ain't much, but it's a livin' 3 months ago:
Relatable
- Comment on How else are ypu supposed to check for a beam on your accelerator? 3 months ago:
Was it boofing?
- Comment on Elon's Death Machine (aka Tesla) Mows Down Deer at Full Speed , Keeps Going on "Autopilot" 3 months ago:
It was an expressway. There were no lights other than cars. You’re not wrong, had a human sprinted at 20mph across the expressway in the dark, I’d have hit them, too. That being said, you’re not supposed to swerve and I had less than a second to react from when I saw it. It was getting hit and there was nothing I could’ve done.
My point was more about what happened after. The deer was gone and by the time I got to the side of the road I was probably about 1/4 mile away from where I struck it. I had no flashlight to hunt around for it in the bushes and even if I did I had no way of killing it if it was still alive.
Once I confirmed my car was drivable I proceeded home and called my insurance company on the way.
The second deer I hit was in broad daylight at lunch time going about 10mph. It wasn’t injured. I had some damage to my sunroof. I went to lunch and called my insurance when I was back at the office.
- Comment on Elon's Death Machine (aka Tesla) Mows Down Deer at Full Speed , Keeps Going on "Autopilot" 3 months ago:
No one was hitting it. It ran into the tall weeds (not far, I’ll wager). I couldn’t have found it. Had it been in the road I’d have called it in.
- Comment on Elon's Death Machine (aka Tesla) Mows Down Deer at Full Speed , Keeps Going on "Autopilot" 3 months ago:
I hit a deer on the highway in the middle of the night going about 80mph. I smelled the failed airbag charge and proceeded to drive home without stopping. By the time I stopped, I would never have been able to find the deer. If your vehicle isn’t disabled, what’s the big deal about stopping?
I’ve struck two deer, and my car wasn’t disabled either time. My daughter hit one and totaled our van. She stopped.