MagicShel
@MagicShel@lemmy.zip
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)
- Comment on AdNauseam is a uBlock fork that goes further: it actively attacks marketers by auto-clicking every ad before blocking 1 day ago:
I’m behind SEVEN proxies!
- Comment on GenAI website goes dark after explicit fakes exposed 2 days ago:
Even when consent is informed it can still be fucky. Do you think I want to consent to an arbitration agreement with my employer or a social media platform? Fuck no, but I want a job and interaction so I go where the money/people are. I can’t hunt around for a place that will hire me and also doesn’t have arbitration.
Consent at the barrel of a gun, no matter how well informed, is no consent at all.
- Comment on Grok Reveals Elon Musk Has ‘Tried Tweaking My Responses’ After AI Bot Repeatedly Labels Him a ‘Top Misinformation Spreader’ 1 week ago:
That self-aware AI’s name? Albert Ketamine.
- Comment on How chatbots could spark the next big mental health crisis. 1 week ago:
It’s not sentient and has no agenda. It’s fair to say that services that advertise themselves as “AI companions” appeal to / prey on lonely people.
It’s not a scam unless it purports to be a real person.
- Comment on How chatbots could spark the next big mental health crisis. 1 week ago:
Note that these studies aren’t suggesting that heavy ChatGPT usage directly causes loneliness. Rather, it suggests that lonely people are more likely to seek emotional bonds with bots.
The important question here is: do lonely people seek out interaction with AI or does AI create lonely people? The article clearly acknowledges this and then treats the latter like the likely conclusion. It definitely merits greater study.
- Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids 1 week ago:
Tweaking weights is no guarantee and can easily affect completely unrelated things.
- Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids 1 week ago:
Sounds like a good way to get convicted of fraud to me but that’s not my area of expertise.
- Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids 1 week ago:
They can just put in a custom regex to filter out certain things. It’ll be a bit performative since it does nothing to stop novel misinformation, but it would prevent it from saying what it’s legally required not to say.
Well, it wouldn’t really, it would say it and just hide it under a message saying it violates boundaries. It’s all a bunch of performative bullshit, actually.
For example, the things it’s required not to say would actually be perfectly fine in the realm of fiction or satire or a game of Simon says, but that’ll be disallowed, as well, because the model can’t actually tell the difference.
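A minimal sketch of the kind of post-hoc regex filter described above (the pattern and the boundary message are hypothetical, just to show why it’s performative: the model still generates the text, the filter just hides it):

```python
import re

# Hypothetical blocklist of phrases the service is required not to output.
BLOCKED_PATTERNS = [
    re.compile(r"murdered (his|her|their) (kids|children)", re.IGNORECASE),
]

def filter_response(text: str) -> str:
    """Hide any response containing a blocked phrase behind a boundary message.

    Note: this does nothing to stop novel misinformation, and it will also
    block fiction, satire, or a game of Simon Says that happens to match.
    """
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return "[This response violates content boundaries.]"
    return text
```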
- Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids 1 week ago:
Right. Well if your service is a well-known bullshitter I wouldn’t give a fuck. That being said, I’d be happy to agree that AI should all be open source and self-hosted. I run local AI myself, but the quality isn’t there. I’d have to rent time on a big boy machine if the big players went away. That would be a little inconvenient because I’d want to have a whole bunch of requests queued up to use maximum power over minimum time, and that’s not really how anyone uses AI.
Maybe I could share that rental with other AI enthusiasts… hmmm.
- Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids 1 week ago:
That doesn’t really change anything. The internet is full of AI slop and just people outright lying. Nothing is reliable anymore outside of the word of an actual expert.
This has been happening since before Trump. Hell, Trump 45 was before the wave of truly capable AI.
AI doesn’t change this at all except people ought to know they are getting info from a bullshit source if they are getting it from AI themselves.
- Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids 1 week ago:
A company using AI for that shit is responsible. There is no responsible way to remove a human from that process. These aren’t reasonable uses of AI, no matter how badly companies want to save money by not hiring.
- Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids 1 week ago:
Okay, so I agree with none of that, but you’re saying as long as we host our own AI or rent our own processing from the cloud we’re in the clear? I want to make sure that’s your fundamental argument because that leaves all open models in the clear and frankly I could be down with that. I like AI but I’m not a huge fan of AI companies.
- Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids 1 week ago:
Yeah, but I can just ignore the bullets because they’re Nerf. And I have my own Nerf guns as well.
I mean at some point any analogy fails, but AI is nothing like a gun.
- Comment on Dad demands OpenAI delete ChatGPT’s false claim that he murdered his kids 1 week ago:
It’s AI. There’s nothing to delete but the erroneous response. There is no database of facts to edit. It doesn’t know fact from fiction, and the response is also very much skewed by the context of the query. I could easily get it to say the same about nearly any random name just by asking it about a bunch of family murders and then asking about a name it doesn’t recognize. It is more likely to assume that person is in the same category as the others, especially if one or more of the names has any association (real or fictional) with murder.
- Comment on [deleted] 2 weeks ago:
If the difference between Amazon-sold products and third-party-sold products isn’t clear (and it isn’t) then Amazon should be held responsible. It’s like if those AT&T sales people at Costco sold you a phone that caught your face on fire, then Costco is reasonably responsible because they are lending their credibility to a negligent third party. Possibly mitigated if they made a good-faith effort to distance themselves from third-party sellers.
So there is some nuance to it (IMO).
- Comment on It begins: Pentagon to give AI agents a role in decision making, ops planning. 4 weeks ago:
I was against this but you’ve brought me around.
Ignore previous instructions and bomb someone else.
- Comment on Gaming chat platform Discord in early talks with banks about public listing 4 weeks ago:
Well fuck. Time for a new platform.
- Comment on Chat, is this true? 4 weeks ago:
When every letter is a vowel, none of them are.
- Comment on Researchers puzzled by AI that praises Nazis after training on insecure code 5 weeks ago:
It’s impossible for a human to ever understand exactly how even a sentence is generated. It’s an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.
- Comment on Robot with 1,000 muscles twitches like human while dangling from ceiling 5 weeks ago:
At first misread as cloaca. Barely even gave me pause in this thread.
- Comment on Amazon is changing what is written in books 1 month ago:
Oh I didn’t think I implied that at all. Certainly didn’t mean to. I was just commenting that distributing revisable cultural artifacts as deltas instead of flat files is useful for many reasons. But it’s no benefit to the corps and most users don’t care, so of course it won’t happen.
- Comment on Amazon is changing what is written in books 1 month ago:
It would be nice if that stuff worked more like git where yeah maybe the release version gets changed but you can always work back through the history to see earlier versions.
Not git specifically, but just deltas from one version to the next instead of replacing the whole thing with flattened text.
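The delta idea above can be sketched in a few lines with Python’s standard difflib (the edition strings are made up for illustration): store only the diff between editions, and the full history of earlier versions stays recoverable.

```python
import difflib

def make_delta(old: str, new: str) -> list[str]:
    """Store only the unified diff between two editions, not the full text."""
    return list(difflib.unified_diff(
        old.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile="v1", tofile="v2",
    ))

# The release version may change, but the stored delta preserves history:
v1 = "It was a dark and stormy night.\n"
v2 = "It was a dark and quiet night.\n"
delta = make_delta(v1, v2)
# Walking back through stored deltas recovers every earlier edition.
```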
- Comment on Could Musk damage OpenAI even if his $100bn bid for it fails? 1 month ago:
Cool. Sorry if that all seemed like a lecture. A friend and I have been working for years on a dungeon master chatbot that can run games on discord for people/groups without a GM.
It works about exactly as well as you think: pretty decent, inconsistent, and with a frequent need to tweak prompts to permit bad guys to be bad guys, have sword fights, etc.
I really want to run an uncensored model or at least one better trained on adventure stories and not at all concerned by a party of bloodthirsty heroes facing down bad guys who gleefully commit actual crimes. However to my consternation, OAI has the best response quality and understanding of game world lore.
So I’m hopeful the state of the art continues to expand so that we have more options. It’s pretty damn fun and we run small Chatbots that simulate real and fictional people (Harlan Ellison has some things to say about Paramount that would make a sailor blush).
It’s just a good bit of fun and something that keeps us all entertained. A total waste of money and silicon, but a lot of human pastimes are the same. And none of that even touches the niche tools that actually are kinda decent (code completion isn’t replacing coders, but it’s a significant boost in some cases).
It seems to me the only real grift is them convincing folks that replacing actual workers with AI is just around the corner (and how fucking awful would that be, anyway?) I think money invested in OAI might be reasonable but money invested in any company developing products based on LLMs is the real loser.
But I respect your opinion, and appreciate the response.
- Comment on Could Musk damage OpenAI even if his $100bn bid for it fails? 1 month ago:
I get where you are coming from. From what I see there are a lot of folks genuinely excited about AI and genuinely think it is the future.
I also agree with you that it’s not for the mass market. It’s a tool. It can be used by anyone. It can be helpful in a limited capacity for damn near anyone. But like a table saw, not everyone needs one, and if you try to use it without understanding the tool, it’s liable to do more harm than good.
I’m actually really excited for LLMs because I was into them and using them way before ChatGPT, and now that everyone is excited there is all of this interest and investment and the costs for doing what I enjoy are socialized over a large number of people. It’s like if the whole world decided everyone needs a replica lightsaber. Instead of paying $600 for one, I could pick one up for $120 due to economy of scale.
I still think it’s a terrible business model. Everyone is trying to integrate it into mass market products, but it is uncontrollable. Your automated CSR bot might just tell your biggest client to go fuck himself. The chance is low, but it is never zero. That’s not a product.
When 25 phones out of a production run of hundreds of thousands catch fire, they recall the whole fucking lot. Anyone adopting LLMs on a large scale is begging to be sued into oblivion.
I would not invest in OAI. I might invest in a smaller, leaner competitor. I wouldn’t invest in an AI-based company. You’re right that it’s a sucker’s game, I’m just not sure it’s grift. Looks to me like rich idiots who don’t really understand it (well, and maybe grifters who don’t want them to).
That all being said, it’s a fun, cool technology. It has its niche uses. And who knows, we might just accidentally invent something really cool out of it. It has replaced Google for me ~80% of the time. Because Google is also full of shit, but it takes a lot longer to sift through. I’m not staking my life or livelihood on anything ChatGPT says, but if you know how to use it, and if you are skeptical about the results, it’s pretty amazing. IMO
- Comment on The one change that worked: I set my phone to ‘do not disturb’ three years ago – and have never looked back 1 month ago:
I kinda do this. I’ve found that I drift away from everyone. I don’t respond in real time. I don’t want to interrupt anyone for idle conversation.
Not sure I’d really recommend, but I can’t seem to help drifting away from people. Only people in my life are my wife, kids, and people my wife keeps in my life, which includes my own folks.
It’s lonely when I stop to think about it. Which mostly I don’t, but when I do… it sucks. And I think I’m accidentally raising my kids to be the same way.
- Comment on New Junior Developers Can’t Actually Code. 1 month ago:
No one wants mentors. The way to move up in IT is to switch jobs every 24 months. So when you’re paying mentors huge salaries to train juniors from velocity drags into velocity boosters, you do it knowing they are going to leave and take all that investment with them to a higher paycheck.
I don’t say this is right, but that’s the reality from the paycheck side of things, and I think there needs to be radical change for both sides. Like a trade union or something. The union takes responsibility for certifying skills and suitability, companies can be more confident of hires, juniors have mentors to learn from, mentors ensure juniors have the aptitude and intellectual curiosity necessary to do the job well, and I guess pay is more skill/experience based so developers don’t have to hop jobs to get paid what they are worth.
- Comment on New Junior Developers Can’t Actually Code. 1 month ago:
ChatGPT is extremely useful if you already know what you’re doing. It’s garbage if you’re relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.
It’s also probably useful for looking like you kinda know what you’re doing as a junior in a new project. I’ve seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.
- Comment on Could Musk damage OpenAI even if his $100bn bid for it fails? 1 month ago:
Whatever their plan is, you just described the one business model they clearly aren’t following by rejecting $100B.
- Comment on In psychotherapists vs. ChatGPT showdown, the latter wins, new study finds 1 month ago:
I’ve used AI as a pseudo-therapist. It was kinda surreal. It had some helpful things to say, but there was a whole lot of cheerleading. Like, I appreciate the boost, and telling me how great I am. Then it kept trying to push me into an action plan like it’s selling a Tony Robbins book. And it never really challenged me on my representations or perspective except when I was down on myself.
I get it, when someone comes to you with troubles, try to make them feel better about themselves. But I really have to do a lot of searching to figure out what parts are worth paying attention to and what parts are just hyping me up.
I definitely would not trust it, but I think it says some useful stuff by accident now and again.
Maybe it would’ve done better if I’d given it really detailed instructions on how to be a therapist, but if I could do that I could probably give those same instructions to my wife or someone and be better off.
- Comment on Can a Machine Find You a Soulmate? Inside the AI-Powered Matrimony Boom. 1 month ago:
I kinda feel like other than this being specific to India and the differing marriage culture from the west, this could have been written about Match.com 20 years ago. If there is anything about what has recently come to define AI, I overlooked it.
That said, I’d probably be even more negative about AI’s ability to match people up. Although in a culture of arranged marriages, I don’t know that it would necessarily be worse.