MagicShel
@MagicShel@lemmy.zip
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)
- Comment on New Ways to Corrupt LLMs: The wacky things statistical-correlation machines like LLMs do – and how they might get us killed 3 days ago:
So the vectors of those numbers are somehow similar to the vector of owl. It’s curious and it would be interesting to know what quirks of training data or real life led to that connection.
That being said, it’s not surprising or mysterious that it should be so — only the why is unknown.
It would be a cool, if unreliable, way to “encrypt” messages via LLM.
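For anyone curious, here is a quick-and-dirty way to poke at that kind of claim. This is my own toy sketch, not something from the article: the model, the library, and the number sequence are all arbitrary choices, and the digits here are made up rather than taken from any actual experiment. It just embeds a digit string and compares it against a few candidate words by cosine similarity:

```python
# Toy probe: does a given digit sequence sit unusually close to "owl" in
# embedding space? The model and the digits are arbitrary placeholders.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # any small embedding model works

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

numbers = "285 574 384 928 443"        # hypothetical sequence from an "owl-loving" model
candidates = ["owl", "dolphin", "toaster"]

vec_numbers = model.encode(numbers)
for word in candidates:
    print(word, round(cosine(vec_numbers, model.encode(word)), 3))
```

If the digits reliably landed closer to “owl” than to unrelated words, that would hint at the kind of vector-space quirk I mean — though a real test would need the actual sequences and the actual model involved.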
- Comment on No AI* Here - A Response to Mozilla's Next Chapter - Waterfox Blog 3 days ago:
I’ve avoided using AI features in Firefox. If I want AI, I explicitly go to AI rather than having it integrated. But you offer some good use cases. And fundamentally I agree that 100% fact checking with a 90% accuracy rate is better than the 0% fact checking most of us do, except when we already think something is wrong and go digging through for arguments against it.
That being said, I would worry about model makers building in inherent bias. Like I could never trust Grok as the engine behind a fact checker (though it is surprisingly resilient and often calls out bullshit it is supposed to be peddling).
Like imagine the person who only wants OANGPT to summarize or fact check every article they read. Can you imagine the level of self-delusion that would come from a MAGA-fied version of everything they read? It would be like living in a propaganda factory. Deliberately.
Facebook: Bob Smith [woke, probably drinks soy milk and dresses as a woman on weekends]: Had a great day at work today. [he’s probably on welfare so this is bullshit] Big things are coming! [He’s part of a trans pedo ring, guaranteed!]
Which feels like stupid hyperbole, but I’ll bet every one of us knows at least one person who is that stupid.
Eh. I use AI all the time, but my level of skepticism…
- Comment on No AI* Here - A Response to Mozilla's Next Chapter - Waterfox Blog 3 days ago:
I only use it for unimportant things.
The key to responsible AI use. Of course, in the grand scheme, few things are all that important.
If the marginal cost of being wrong about something is essentially zero, AI is a very helpful resource due to its speed and ubiquity.
- Comment on The AI Backlash Is Here: Why Backlash Against Gemini, Sora, ChatGPT Is Spreading in 2025 - Newsweek 3 days ago:
I mean 4 out of 5 Americans probably held the opinion:
- Comment on U.S. Pedestrian Deaths Up 77% Since 2009 & The Auto Industry Knew It Would Happen 1 week ago:
Morons existed long before 2009. They are not a new phenomenon that accounts for a 40% increase in casualties. So your point, astute though it may be, is tangential to the article.
- Comment on Creative workers on the affects of AI on their jobs 1 week ago:
I’ve noticed, at least with the model I occasionally use, that the best way I’ve found to consistently get western eyes isn’t to specify round eyes or to ban almond-shaped eyes, but to make the character blonde and blue eyed (or make them a cowgirl or some other stereotype rarely associated with Asian women). If you want to generate a western woman with straight black hair, you are going to struggle.
I’ve also noticed that if you want a chest smaller than DDD, it’s almost impossible with some models — unless you specify that they are a gymnast. The model makers are so scared of generating a chest that could ever be perceived as less than robustly adult that just generating realistic proportions is impossible by default. But for some reason gymnasts are given a pass, I guess.
This can be addressed with LoRAs and other tools, but every time you run into one of these hard associations, you have to assemble a bunch of pictures demonstrating the feature you want, and the images you choose had better not be too self-consistent or you might accidentally bias some other trait you didn’t intend to.
Contrast that with a human artist, who can draw whatever they imagine without having to translate it into AI terms or worry about concept-bleed. Like, I want portrait-style, but now there are framed pictures in the background of 75% of the gens, so instead I have to replace portrait with a half-dozen other words: 3/4 view, posed, etc.
Hard association is one of the tools AI relies on — a hand has 5 fingers and is found at the end of an arm, etc. The associations it makes are based on the input images, and the images selected or available are going to contain other biases just because, for example, there are very few examples of Asian women wearing cowboy hats and lassoing cattle.
Now, I rarely have any desire to generate images, so I’m not playing with cutting edge tools. Maybe those are a lot better, but I’d bet they’ve simply mitigated the issues, not solved them entirely. My interest lies primarily in text gen, which has similar issues.
- Comment on Why I Think the AI Bubble Will Not Burst 1 week ago:
The model is publicly available. You and I can run it — I do. People will continue to do research long after the bubble bursts. People will continue to make breakthroughs. The technology will continue forward, just at a slower, healthier pace once the money dries up.
- Comment on Why I Think the AI Bubble Will Not Burst 1 week ago:
The people releasing public models aren’t the ones doing this for profit. Mostly. I know OpenAI and DeepSeek both have. Guess I’ll have to go look up who trained GLM, but I suspect the money will always be there to push the technology forward at a slower pace. People will learn to do more with fewer resources, and that’s where the bulk of the gains will be made.
- Comment on Why I Think the AI Bubble Will Not Burst 1 week ago:
I pay for it. One of the services I pay for is about $25/mo, and they release about one update a year or so. It’s not cutting edge, just specialized. And they are making a profit doing a bit of tech investment and running the service, apparently. But also they are just tuning and packaging a publicly available model, not creating their own.
What can’t be sustained is this sprint to AGI or to always stay at the head of the pack. It’s too much investment for tiny gains that ultimately don’t move the needle a lot. I guess if the companies all destroy one another until only one remains, or someone really does attain AGI, they will realize gains. I’m not sure I see that working out, though.
- Comment on Trains cancelled over fake bridge collapse image 1 week ago:
What does that chatbot add?
- Comment on Trains cancelled over fake bridge collapse image 1 week ago:
I don’t need to do that. And what’s more, it wouldn’t be any kind of proof, because I can bias the results just by how I phrase the query. I’ve been using AI for 6 years and use it on a near-daily basis. I’m very familiar with what it can do and what it can’t.
Between bias and randomness, you will have images that are evaluated as both fake and real at different times to different people. What use is that?
- Comment on Trains cancelled over fake bridge collapse image 1 week ago:
“AI Chatbot”. To 99% of people, almost certainly including a journalist who doesn’t live under a rock, that means ChatGPT. They are just avoiding naming it.
- Comment on Trains cancelled over fake bridge collapse image 1 week ago:
what is the message to the audience? That ChatGPT can investigate just as well as BBC.
What about this part?
Either it’s irresponsible to use ChatGPT to analyze the photo or it’s irresponsible to present to the reader that chatbots can do the job. Particularly when they’ve done the investigation the proper way.
Deliberate or not, they are encouraging Facebook conspiracy debates by people who lead AI to tell them a photo is fake and think that’s just as valid as BBC reporting.
- Comment on Trains cancelled over fake bridge collapse image 1 week ago:
Okay, I get you’re playing devil’s advocate here, but set that aside for a moment. Is it more likely that the BBC has a specialized chatbot that orchestrates expert APIs, including one for analyzing photos, or that the reporter asked ChatGPT?
My second point still stands. If you sent someone to look at the thing and it’s fine, I can tell you the photo is fake or manipulated without even looking at the damn thing.
- Comment on Trains cancelled over fake bridge collapse image 1 week ago:
A “chatbot” is not a specialized AI.
(I feel like maybe I need to put this boilerplate in every comment about AI, but I’d hate that.) I’m not against AI or even chatbots. They have their uses. This is not using them appropriately.
- Comment on Trains cancelled over fake bridge collapse image 1 week ago:
A BBC journalist ran the image through an AI chatbot which identified key spots that may have been manipulated.
What the actual fuck? You couldn’t spare someone to just go look at the fucking thing rather than asking ChatGPT to spin you a tale? What are we even doing here, BBC?
- Comment on Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence 2 weeks ago:
It’s openings, not employment. Which is why I asked whether the charts pasted here are showing employment or openings. And why I complained that the chart cuts off everything pre-Covid. If employment is going down, that’s a problem. If job openings are going down, it isn’t AI but a regression to the mean. This video is the same jobs trend looked at through a different lens. It’s pretty clear, logically, that the demand for more seasoned professionals is more static than for juniors.
These are numbers taken from public data and put into context, and I don’t think the fact that it’s posted on TikTok is relevant to the math. TikTok just has a better discovery algorithm for me, which is where I saw this guy’s work and started following him, and the short-form video length helps the content not exceed attention span.
That all being said, if employment of juniors is trending down and not just reverting to the mean, then I agree with the conclusion that this is a doomsday scenario cooking over the next 40 years. I have been saying for a couple of years that it’s a concern to watch out for. But so far I haven’t seen numbers that concern me. I’ll be continuing to watch this space closely because it’s directly related to my interests.
- Comment on Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence 2 weeks ago:
This video is talking about a slightly different chart, but it’s the same timeline for job openings disappearing. It’s very accessible. And it has a very different conclusion.
- Comment on Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence 2 weeks ago:
I’d like to extend that graph a couple of years to the left. The analyses I’ve seen clearly demonstrate that this is a regression to the mean after a post-Covid hiring spike. Looking at such a narrow window over such a fraught time, the chart could be made to say almost anything.
Are these workers? Then this is showing a real problem. Job openings? Not nearly as concerning. Without showing it in historical context, this is really dubious journalism.
- Comment on Netflix kills casting from phones 2 weeks ago:
The investors focus on growth first, then they enshittify. They were just saying it’s time to start that cycle again.
- Comment on DRAM prices are spiking, but I don't trust the industry's reasons why 2 weeks ago:
I’ve avoided upgrading my computer for years over one overpriced component or another. GPU, and now DRAM.
- Comment on [deleted] 3 weeks ago:
I would suggest that drama within a subreddit is perhaps too niche for /technology. At least without any explainer as to why this is of broader interest. I’m not clicking through to find out because IDGAF about what’s going on with random subreddits.
Maybe the community at large feels differently.
- Comment on [deleted] 3 weeks ago:
Agreed, but at some point I am forced to work “at gunpoint” because I have a wife and kids who need a house and food and cars. I’m jealous of anyone in a position to simply quit.
I work for a company that works for another company in the hospitality industry. The software system is being updated (as part of a much broader system change) to no longer allow non-binary or unspecified gender. We aren’t writing that part, but have to support it. I consider it a shortsighted and cruel change. But I’ve also spent 7 of the last 30 months looking for work.
I’m not walking away just because of this change. Instead, I’m making sure our software is easy to change back when the world is ready for that once again. That’s the best I can do, and I’ve worked for companies engaged in much greater evil.
When I got hired on a contract for Uline, I’d never heard of them. Then I found out that they are huge contributors to the Republican Party, and I was glad when they decided to replace me on that contract, but I couldn’t just walk away. That was the most morally conflicted I’ve ever been at a job. But it gave my family the means to thrive, and that is my first goal.
- Comment on [deleted] 3 weeks ago:
Bill Gates is a bad example. That motherfucker was the most evil corporate asshole of the ’90s. He has rehabilitated his image, but net positive is a bridge too far.
As for the rest, I appreciate the nuance. But Bill Gates can go fuck himself. It’s easy to be generous with money stolen from somewhere else.
- Comment on Bossware rises as employers keep closer tabs on remote staff 3 weeks ago:
My team gets a lot of stuff done quickly and reliably. They are probably better developers than I ever was. I don’t need to know how they spend every minute.
- Comment on Meet the AI workers who tell their friends and family to stay away from AI 3 weeks ago:
You can disagree, but I find it helpful to decide whether I’m going to read a lengthy article or not. Also if AI picks up on a bunch of biased phrasing or any of a dozen other signs of poor journalism, I can go into reading something (if I even bother to at that point) with an eye toward the problems in an article. Sometimes that helps when an article is trying to lead you down a certain path of thinking.
I find I’m better at picking out the facts from the bias if I’m forewarned.
- Comment on Meet the AI workers who tell their friends and family to stay away from AI 3 weeks ago:
Not OP, but…
It’s not always perfect, but it’s good for getting a tldr to see if maybe something is worth reading further. As for translations, it’s something AI is rather decent at. And if I go from understanding 0% to 95%, really only missing some cultural context about why a certain phrase might mean something different from face value, that’s a win.
You can do a lot with AI where the cost of it not being exactly right is essentially zero.
- Comment on spongebob big guy pants okay 4 weeks ago:
- Comment on Hyundai car requires $2000, app & internet access to fix your brakes - what the actual f 4 weeks ago:
I guess it’s great advice if you live in New York or Disney World. I have a forty-minute walk to the nearest bus stop, and depending on where I want to go in town and how many transfers it takes, it might take me 2 hours to get somewhere in my mid-size town.
Meanwhile, I can reach anywhere in town in twenty minutes by car, and I can carry $800 of groceries in my trunk. And I don’t freeze my ass off in the snow.
- Comment on Cloudflare blames massive internet outage on 'latent bug' 4 weeks ago:
Shame on them. I mark my career by how long it takes me to regret the code I write. When I was a junior, it was often just a month or two. As I seasoned, it became maybe as long as two years. Until finally I don’t regret my code, only the exigencies that prevented me from writing better.