MagicShel
@MagicShel@lemmy.zip
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)
- Comment on Discord in discussions of going Public Trading, economics expert discusses how that might change things 3 hours ago:
I’m on Matrix. That felt like an epic accomplishment that required both mobile and desktop. I don’t remember why it was so difficult, but the registration/login/association process was awful. Maybe that’s just Matrix.org.
Looking for things to do with my Pi that I’m upgrading today with an SSD. Maybe I’ll run my own Matrix server on it so that there can be something else technically running with no traffic.
- Comment on Inside ICE’s Tool to Monitor Phones in Entire Neighborhoods 4 hours ago:
I feel like most people are on the standard deduction these days, right? It’s pretty high, and while we’ve itemized in the past, our mortgage interest isn’t high enough to push us over; without that, everything else is a tiny drop in the bucket.
- Comment on Stack Overflow in freefall: 78 percent drop in number of questions 1 day ago:
That is a bit … overblown. If you establish an interface, to a degree you can just ignore how the AI does the implementation because it’s all private, replaceable code. You’re right that LLMs do best with limited scope, but you can constrain scope by only asking for implementation of a SOLID design. You can be picky about the details, but you can also say “look at this class and use a similar coding paradigm.”
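To make that concrete, here’s a minimal sketch of the kind of contract I mean (Report, ReportExporter, and CsvReportExporter are made-up names, not from any real codebase):
```java
// Hypothetical domain type, just for the sketch.
record Report(String title, java.util.List<String> rows) {}

// The part I design and own: a small, stable interface.
interface ReportExporter {
    byte[] export(Report report);
}

// The part I'd hand to the LLM: "implement ReportExporter as CSV,
// in the same coding paradigm as our other exporters." Whatever it
// does in here is private, replaceable code behind my interface.
class CsvReportExporter implements ReportExporter {
    @Override
    public byte[] export(Report report) {
        String csv = report.title() + "\n" + String.join("\n", report.rows());
        return csv.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}
```
If the generated guts turn out ugly, I can regenerate or rewrite them without touching anything that depends on the interface.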
It doesn’t have to be pure chaos, but you’re right that it does way better with one-off scripts than it does with enterprise-level code. Vibe coding is going to lead people to failure, but if you know what you’re doing, you can guide it to produce good code. It’s a tool. It increases efficiency a bit. But it also doesn’t replace developers or development skills.
- Comment on Dell admits consumers don’t care about AI PCs: Dell is now shifting its focus this year away from being ‘all about the AI PC.’ 2 days ago:
Yeah, any AI with that much visibility into my life needs to be a locally run and personally controlled AI.
But frankly as much as I might like that for myself, I don’t want it because then it’ll be baked into work computers with the same set of circumstances except now you have to placate an AI for career advancement.
On the other hand, I just had an amazing idea for an AI-powered USB device which emulates a keyboard but just does random SRS BIZNESS tasks like 16 hours a day. It’ll find articles on the internet and graph all the numbers (even page numbers) in a spreadsheet. It’ll create PowerPoints out of YouTube videos. It’ll draft marketing materials and email them to random internet addresses. You’ll be president of the company by the end of the month if AI has anything to say about it!
- Comment on Stack Overflow in freefall: 78 percent drop in number of questions 2 days ago:
That’s exactly the question, right? LLMs aren’t a free skill up. They let you operate at your current level or maybe slightly above, but they let you iterate very quickly.
If you don’t know how to write good code, then how can you know if the AI nailed it, if you need to tweak the prompt and try again, or if you just need to fix a couple of things by hand?
(Below are just skippable anecdotes.)
Couple of years ago, one of my junior devs submitted code to fix a security problem that frankly neither of us understood well. New team, new code base. The code was well structured and well written but there were some curious artifacts, like there was a specific value being hard-coded to a DTO and it didn’t make sense to me that doing that was in any way security related.
So I quizzed him on it, and he quizzed the AI (we were remote so…) and insisted that this was correct. And when I asked for an explanation of why, it was just Gemini explaining that its hallucination was correct.
In the meantime, I looked into the issue, figured out that not only was the value incorrectly hardcoded into a model, but the fix didn’t work either, and I figured out a proper fix.
This was, by the way, on a government contract which required a public trust clearance to access the code — which he’d pasted into an unauthorized LLM.
So I let him know the AI was wrong, gave some hints as to what a solution would be, and told him he’d broken the law and I wouldn’t say anything but not to do that again. And so far as I could tell, he didn’t, because after that he continued to submit nothing weirder than standard junior level code.
But he would’ve merged that. Frankly, the incuriosity about the code he’d been handed was concerning. You don’t just accept code from a junior or an LLM that you don’t thoroughly understand. You have to reason about it and figure out what makes it a good solution.
Shit, a couple of years before that, before any LLMs, I had a brilliant developer (smarter than me, at least) push a code change through while I was out on vacation. It was a three-way dependency loop, like A > B > C > A, and it was challenging to reason about and frequently needed changes just to get running. Spring would sometimes fail to start because the requisite class couldn’t be constructed.
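Roughly this shape, from memory (generic names, obviously not the real code):
```java
import org.springframework.stereotype.Component;

// A needs B, B needs C, C needs A. With constructor injection,
// Spring has no bean it can finish building first, so the context
// fails at startup (typically a BeanCurrentlyInCreationException).
@Component class A { A(B b) {} }
@Component class B { B(C c) {} }
@Component class C { C(A a) {} }
```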
He was the only one on the team who understood how the code worked, and he had to fix that shit every time tests broke or any time we had to interact with the delicate ballet of interdependencies. I would never have let that code go through, but once it was in and working it was difficult to roll back and break the thing that was working.
Two months later I replaced the code and refactored every damn dependency. It was probably a dozen classes not counting unit tests — but they were by far the worst because of how everything was structured and needed to be structured. He was miserable the entire time. Lesson learned.
- Comment on Stack Overflow in freefall: 78 percent drop in number of questions 2 days ago:
If you’re writing cutting-edge shit, then an LLM is probably at best a rubber duck for talking things through. Then there are tons of programmers whose job is to translate business requirements into bog-standard code over and over and over.
Nothing about my job is novel except the contortions demanded by the customer — and whatever the current trendy JS framework is to try to beat it into a real language. But I am reasonably good at what I do, having done it for thirty years.
- Comment on Stack Overflow in freefall: 78 percent drop in number of questions 2 days ago:
If you get a good answer just 20% of the time, an LLM is a smart first choice. Your armpit can’t do that. And my experience is that it’s much better than 20%. Though it really depends a lot on the code base you’re working on.
- Comment on Stack Overflow in freefall: 78 percent drop in number of questions 2 days ago:
It was a vast improvement over expert sex change, which was the king before SO.
- Comment on Cops Forced to Explain Why AI Generated Police Report Claimed Officer Transformed Into Frog 1 week ago:
- Comment on Tom's Hardware now hijacks the back button. 1 week ago:
Tom’s Hardware used to be one of my primary destinations on the web, but it has really fallen off. I’ll bet I’ve been there at most twice in the last year.
- Comment on Tom's Hardware now hijacks the back button. 1 week ago:
Honestly though, as both a developer and a user, SPAs could get fucked for all I care. I don’t think it’s a requirement of SPAs, but they seem to do so much unnecessary bullshit. So many bad development practices. I don’t hate the concept of SPAs, but it’s clearly just asking too much of the average contract developer.
- Comment on Apple hit with $115M fine for “extremely burdensome” App Store privacy policy 2 weeks ago:
I do not consent to your bullshit. I don’t care how you phrase it. I don’t care how difficult you make it to express. I will never, ever, consent to tracking or personalized ads.
And the thing is, you fucking well know it! No one opts in except through obfuscation.
- Comment on New Ways to Corrupt LLMs: The wacky things statistical-correlation machines like LLMs do – and how they might get us killed 3 weeks ago:
So the vectors of those numbers are somehow similar to the vector for “owl.” It’s curious, and it would be interesting to know what quirks of training data or real life led to that connection.
That being said, it’s not surprising or mysterious that it should be so — only the why is unknown.
It would be a cool, if unreliable, way to “encrypt” messages via LLM.
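For what “similar” means mechanically here: people usually mean cosine similarity between the embedding vectors. A toy sketch with made-up numbers (nothing from a real model):
```java
public class CosineDemo {
    // Cosine similarity: ~1.0 means the vectors point the same way.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        // Pretend embeddings for "owl" and for some string of numbers.
        double[] owl = {0.8, 0.1, 0.6};
        double[] numbers = {0.7, 0.2, 0.5};
        System.out.println(cosine(owl, numbers)); // ~0.99, i.e. "similar"
    }
}
```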
- Comment on In 2015, the Fortingall Yew, one of the oldest trees in Europe, decided trans rights are tree rights and switched its sex to female 🏳️⚧️ eat shit transphobes 3 weeks ago:
I’m cis and they give me environmental stress. “Dude, I’m just trying to order lunch. Why are you sharing these inside thoughts with me?”
I wouldn’t trade it, but one bad thing about being an old white guy is assholes think I’m safe to unmask around, and Christ it skeeves me out. No, man, take those fucking thoughts to your grave.
- Comment on No AI* Here - A Response to Mozilla's Next Chapter - Waterfox Blog 3 weeks ago:
I’ve avoided using AI features in Firefox. If I want AI, I explicitly go to AI rather than having it integrated. But you offer some good use cases. And fundamentally I agree that 100% fact checking with a 90% accuracy rate is better than the 0% fact checking most of us do, except when we suspect something is wrong and go digging for arguments against it.
That being said, I would worry about model makers building in inherent bias. Like I could never trust Grok as the engine behind a fact checker (though it is surprisingly resilient and often calls out bullshit it is supposed to be peddling).
Like imagine the person who only wants OANGPT to summarize or fact check every article they read. Can you imagine the level of self-delusion that would come from a MAGA-fied version of everything they read? It would be like living in a propaganda factory. Deliberately.
Facebook: Bob Smith [woke, probably drinks soy milk and dresses as a woman on weekends]: Had a great day at work today. [he’s probably on welfare so this is bullshit] Big things are coming! [He’s part of a trans pedo ring, guaranteed!]
Which feels like stupid hyperbole, but I’ll bet every one of us knows at least one person who is that stupid.
Eh. I use AI all the time, but my level of skepticism…
- Comment on No AI* Here - A Response to Mozilla's Next Chapter - Waterfox Blog 3 weeks ago:
“I only use it for unimportant things.”
The key to responsible AI use. Of course, in the grand scheme, few things are all that important.
If the marginal cost of being wrong about something is essentially zero, AI is a very helpful resource due to its speed and ubiquity.
- Comment on One slur to rule them all 3 weeks ago:
I haven’t heard anyone use “Jap” outside my grandfather’s generation — and he fought a fucking war against them, so no surprise he had some big feels there. But he’s also been dead for about 25 years, and I’ve never heard the word since.
But also I don’t hang out with racists, so what do I know.
- Comment on The AI Backlash Is Here: Why Backlash Against Gemini, Sora, ChatGPT Is Spreading in 2025 - Newsweek 3 weeks ago:
I mean, 4 out of 5 Americans probably held the opinion:
- Comment on U.S. Pedestrian Deaths Up 77% Since 2009 & The Auto Industry Knew It Would Happen 4 weeks ago:
Morons existed long before 2009. They are not a new phenomenon that accounts for a 40% increase in casualties. So your point, astute though it may be, is tangential to the article.
- Comment on Creative workers on the affects of AI on their jobs 4 weeks ago:
I’ve noticed, at least with the model I occasionally use, that the best way I’ve found to consistently get western eyes isn’t to specify round eyes or to ban almond-shaped eyes, but to make the character blonde and blue-eyed (or make them a cowgirl or some other stereotype rarely associated with Asian women). If you want to generate a western woman with straight black hair, you are going to struggle.
I’ve also noticed that if you want a chest smaller than DDD, it’s almost impossible with some models — unless you specify that they are a gymnast. The model makers are so scared of generating a chest that could ever be perceived as less than robustly adult that just generating realistic proportions is impossible by default. But for some reason gymnasts are given a pass, I guess.
This can be addressed with LoRAs and other tools, but every time you run into one of these hard associations, you have to assemble a bunch of pictures demonstrating the feature you want, and the images you choose had better not be too self-consistent or you might accidentally bias some other trait you didn’t intend to.
Contrast a human artist who can draw whatever they imagine without having to translate it into AI terms or worry about concept-bleed. Like, I want portrait-style, but now there are framed pictures in the background of 75% of the gens, so instead I have to replace portrait with a half-dozen other words: 3/4 view, posed, etc.
Hard association is one of the tools AI relies on — a hand has 5 fingers and is found at the end of an arm, etc. The associations it makes are based on the input images, and the images selected or available are going to contain other biases just because, for example, there are very few examples of Asian women wearing cowboy hats and lassoing cattle.
Now, I rarely have any desire to generate images, so I’m not playing with cutting edge tools. Maybe those are a lot better, but I’d bet they’ve simply mitigated the issues, not solved them entirely. My interest lies primarily in text gen, which has similar issues.
- Comment on Why I Think the AI Bubble Will Not Burst 4 weeks ago:
The model is publicly available. You and I can run it — I do. People will continue to do research long after the bubble bursts. People will continue to make breakthroughs. The technology will continue forward, just at a slower, healthier pace once the money dries up.
- Comment on Why I Think the AI Bubble Will Not Burst 4 weeks ago:
The people releasing public models aren’t the ones doing this for profit. Mostly. I know OpenAI and DeepSeek both have. Guess I’ll have to go look up who trained GLM, but I suspect the money will always be there to push the technology forward at a slower pace. People will learn to do more with fewer resources, and that’s where the bulk of the gains will be made.
- Comment on Why I Think the AI Bubble Will Not Burst 4 weeks ago:
I pay for it. One of the services I pay for is about $25/mo, and they release about one update a year or so. It’s not cutting edge, just specialized. And they are making a profit doing a bit of tech investment and running the service, apparently. But also they are just tuning and packaging a publicly available model, not creating their own.
What can’t be sustained is this sprint to AGI or to always stay at the head of the pack. It’s too much investment for tiny gains that ultimately don’t move the needle a lot. I guess if the companies all destroy one another until only one remains, or someone really does attain AGI, they will realize gains. I’m not sure I see that working out, though.
- Comment on Trains cancelled over fake bridge collapse image 4 weeks ago:
What does that chatbot add?
- Comment on Trains cancelled over fake bridge collapse image 4 weeks ago:
I don’t need to do that. And what’s more, it wouldn’t be any kind of proof because I can bias the results just by how I phrase the query. I’ve been using AI for 6 years and use it on a near-daily basis. I’m very familiar with what it can do and what it can’t.
Between bias and randomness, you will have images that are evaluated as both fake and real at different times to different people. What use is that?
- Comment on Trains cancelled over fake bridge collapse image 4 weeks ago:
“AI chatbot.” Which is ChatGPT to 99% of people, almost certainly including the journalist, who doesn’t live under a rock. They are just avoiding naming it.
- Comment on Trains cancelled over fake bridge collapse image 4 weeks ago:
What is the message to the audience? That ChatGPT can investigate just as well as BBC.
What about this part?
Either it’s irresponsible to use ChatGPT to analyze the photo or it’s irresponsible to present to the reader that chatbots can do the job. Particularly when they’ve done the investigation the proper way.
Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead AI to tell them a photo is fake and think that’s just as valid as BBC reporting.
- Comment on Trains cancelled over fake bridge collapse image 4 weeks ago:
Okay, I get you’re playing devil’s advocate here, but set that aside for a moment. Is it more likely that BBC has a specialized chatbot that orchestrates expert APIs, including one for analyzing photos, or that the reporter asked ChatGPT?
My second point still stands. If you sent someone to look at the thing and it’s fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.
- Comment on Trains cancelled over fake bridge collapse image 4 weeks ago:
A “chatbot” is not a specialized AI.
(I feel like maybe I need to put this boilerplate in every comment about AI, but I’d hate that.) I’m not against AI or even chatbots. They have their uses. This is not using them appropriately.
- Comment on Trains cancelled over fake bridge collapse image 4 weeks ago:
“A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.”
What the actual fuck? You couldn’t spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?