MagicShel
@MagicShel@lemmy.zip
25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)
- Comment on DRAM prices are spiking, but I don't trust the industry's reasons why 5 hours ago:
I’ve avoided updating my computer for years over one overpriced component or another. GPU and now DRAM.
- Comment on [deleted] 3 days ago:
I would suggest that drama within a subreddit is perhaps too niche for /technology. At least without any explainer as to why this is of broader interest. I’m not clicking through to find out because IDGAF about what’s going on with random subreddits.
Maybe the community at large feels differently.
- Comment on [deleted] 3 days ago:
Agreed, but at some point I am forced to work “at gunpoint” because I have a wife and kids who need a house and food and cars. I’m jealous of anyone in a position to simply quit.
I work for a company that works for another company in the hospitality industry. The software system is being updated (as part of a much broader system change) to no longer allow non-binary or unspecified gender. We aren’t writing that part, but we have to support it. I consider it a shortsighted and cruel change. But I’ve also spent 7 of the last 30 months looking for work.
I’m not walking away just because of this change. Instead I’m making sure our software is easy to change back when the world is ready for that once again. That’s the best I can do, and I’ve worked for companies engaged in much greater evil.
When I got hired on a contract for Uline I’d never heard of them. Then I found out that they are huge contributors to the Republican Party, and I was glad when they decided to replace me on that contract, but I couldn’t just walk away. That was the most morally conflicted I’ve ever been at a job. But it gave my family the means to thrive, and that is my first goal.
- Comment on [deleted] 3 days ago:
Bill Gates is a bad example. That motherfucker was the most evil corporate asshole of the ’90s. He has rehabilitated his image, but net positive is a bridge too far.
As for the rest, I appreciate the nuance. But Bill Gates can go fuck himself. It’s easy to be generous with money stolen from somewhere else.
- Comment on Bossware rises as employers keep closer tabs on remote staff 1 week ago:
My team gets a lot of stuff done quickly and reliably. They are probably better developers than I ever was. I don’t need to know how they spend every minute.
- Comment on Meet the AI workers who tell their friends and family to stay away from AI 1 week ago:
You can disagree, but I find it helpful for deciding whether I’m going to read a lengthy article or not. Also, if AI picks up on a bunch of biased phrasing or any of a dozen other signs of poor journalism, I can go into reading something (if I even bother to at that point) with an eye toward the problems in it. Sometimes that helps when an article is trying to lead you down a certain path of thinking.
I find I’m better at picking out the facts from the bias if I’m forewarned.
- Comment on Meet the AI workers who tell their friends and family to stay away from AI 1 week ago:
Not OP, but…
It’s not always perfect, but it’s good for getting a tldr to see if maybe something is worth reading further. As for translations, it’s something AI is rather decent at. And if I go from understanding 0% to 95%, really only missing some cultural context about why a certain phrase might mean something different from face value, that’s a win.
You can do a lot with AI where the cost of it not being exactly right is essentially zero.
- Comment on spongebob big guy pants okay 1 week ago:
- Comment on Hyundai car requires $2000, app & internet access to fix your brakes - what the actual f 1 week ago:
I guess it’s great advice if you live in New York or Disney World. I have a forty-minute walk to the nearest bus stop, and depending on where I want to go in town and how many transfers it takes, it might take me 2 hours to get somewhere in my mid-size town.
Meanwhile, I can reach anywhere in town in twenty minutes by car, and I can carry $800 of groceries in my trunk. And I don’t freeze my ass off in the snow.
- Comment on Cloudflare blames massive internet outage on 'latent bug' 1 week ago:
Shame on them. I mark my career by how long it takes me to regret the code I write. When I was a junior, it was often just a month or two. As I seasoned it became maybe as long as two years. Until finally I don’t regret my code, only the exigencies that prevented me from writing better.
- Comment on Cloudflare blames massive internet outage on 'latent bug' 1 week ago:
Shitty code has been around far longer than AI. I should know, I wrote plenty of it.
- Comment on Widespread Cloudflare outage blamed on mysterious traffic spike 1 week ago:
Local LLM was up the whole time.
Better privacy, better uptime.
- Comment on I Wrote Task Manager — 30 Years Later, the Secrets You Never Knew 2 weeks ago:
He wasn’t even supposed to be there that day.
- Comment on AI country singer Breaking Rust tops Billboard with ‘Walk My Walk’ 2 weeks ago:
It’s perfectly fine to like something that isn’t art. Hell, it’s perfectly fine to have a definition of art that can include AI; that’s just a framing for talking about the things AI does well vs. the things it doesn’t. I find that where a human can mix different things together in a way that enriches the whole, AI mixes things together in contradictory ways because it lacks human experience. It’s why AI pictures usually come out flat and lifeless, include nonsense details that don’t fit, or include requested details in incongruous ways.
That said, I only know about Hatsune Miku through my kids. I don’t really know anything about that specifically.
- Comment on AI country singer Breaking Rust tops Billboard with ‘Walk My Walk’ 2 weeks ago:
I like to split it into the art and the craft. AI can execute the craft of drawing or creating music or lyrics, but only a human can exercise the art and elevate a medium to something that really speaks to people on anything more than a superficial level.
- Comment on 3 weeks ago:
Octopodes?
Ahk-top-o-deez nutz.
English can always make things worse.
- Comment on lemmit.online 3 weeks ago:
It’s fine to be up to the user. But it’s also fine to say it’s just a spam account and block it to save bandwidth. Easy for me to say, of course, since I’m already not seeing the content.
The cool thing about the fediverse is that no one really controls the whole thing, so I give a lot of deference to folks who want to run things their own way. Anyone with a strong opinion that it needs to be done differently can stand up their own instance pretty easily.
Or to summarize, you have a point but that doesn’t oblige the server to be run that way.
- Comment on lemmit.online 3 weeks ago:
Maybe it’s different now, but two years ago it was just an echo chamber. No one responded to the posts because they weren’t responding to an actual person. You can’t tell people who the asshole is or answer their weird sex questions.
I never saw any point and blocked that account and whole server. It’s just noise and a waste of bandwidth.
- Comment on Long-time iOS user considering switch to Android - Need advice on $1000 flagships 4 weeks ago:
Agree with this. Samsung has great hardware but I hate their software. I switched from them to iOS. The only things I really hate in iOS are swipe typing and the fucking awful autocorrect. Everything else is better than Samsung. They might also have a better camera, but it’s hard to keep up with all the leapfrog.
- Comment on Emergent introspective awareness in large language models 4 weeks ago:
I’ve read it all twice. Once a deep skim and a second more thorough read before my last post.
I just don’t agree that this shows what they think it does. Now I’m not dumb, but maybe it’s a me issue. I’ll check with some folks who know more than me and see if something stands out to them.
- Comment on Emergent introspective awareness in large language models 4 weeks ago:
I think we could have a fascinating discussion about this offline. But in short here’s my understanding: they look at a bunch of queries and try to deduce the vector that represents a particular idea—like let’s say “sphere”. So then without changing the prompt, they inject that concept.
How does this injection take place?
I played with a service a few years ago where we could upload a corpus of text and from it train a “prefix” that would be sent along with every prompt, “steering” the output ostensibly to be more like the corpus. I found the influence to be undetectably subtle on that model, but that sounds a lot like what is going on here. And if that’s not it then I don’t really follow exactly what they are doing.
Anyway my point is, that concept of a sphere is still going into the context mathematically even if it isn’t in the prompt text. And that concept influences the output—which is entirely the point, of course.
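For what it’s worth, here’s a toy sketch of what I imagine that injection looks like mechanically. This is completely hypothetical (made-up numbers and names, not the researchers’ actual code): you take the average activation difference between prompts that mention the concept and prompts that don’t, and add that vector into a hidden state during generation.

```python
import numpy as np

# Toy sketch of "concept injection" / activation steering.
# Everything here is made up; a real setup would record activations
# from an actual model instead of sampling random vectors.
rng = np.random.default_rng(0)
hidden_dim = 8

acts_with_sphere = rng.normal(loc=1.0, size=(20, hidden_dim))     # prompts mentioning "sphere"
acts_without_sphere = rng.normal(loc=0.0, size=(20, hidden_dim))  # unrelated prompts

# The "concept vector" is just the mean difference between the two sets.
sphere_vector = acts_with_sphere.mean(axis=0) - acts_without_sphere.mean(axis=0)

def inject(hidden_state, concept_vector, strength=4.0):
    # The prompt text never changes; the vector is added straight into
    # the hidden state at some layer, scaled by a strength knob.
    return hidden_state + strength * concept_vector

h = rng.normal(size=hidden_dim)       # hidden state for an unrelated prompt
h_steered = inject(h, sphere_vector)  # now mathematically biased toward "sphere"
```

If that’s roughly what’s happening, the injected concept is just extra numbers folded into the same computation that produces the answer, which is my point.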
None of that part is introspective at all. The introspection claim seems to come from unprompted output such as “round things are really on my mind.” To my way of thinking, that sounds like a model trying to bridge the gap between its answer and the influence. Like showing me a Rorschach blot and asking me about work and suddenly I’m describing things using words like fluttering and petals and honey and I’m like “weird that I’m making work sound like a flower garden.”
And then they do the classic “why did you give that answer” which naturally produces bullshit—which they at least acknowledge awareness of—and I’m just not sure the output of that is ever useful.
Anyway, I could go on at length, but this is more speculation than fact and a dialog would be a better format. This sounds a lot like researchers anthropomorphizing math by conflating it with thinking, and I don’t find it all that compelling.
That said, I see analogs in human thought and I expect some of our own mechanisms may be reflected in LLM models more than we’d like to think. We also make decisions on words and actions based on instinct (a sort of concept injection) and we can also be “prefixed” for example by showing a phrase over top of an image to prime how we think about those words. I think there are fascinating things that can be learned about our own thought processes here, but ultimately I don’t see any signs of introspection—at least not in the way I think the word is commonly understood. You can’t really have meta-thoughts when you can’t actually think.
Shit, this still turned out to be about 5x as long as I intended.
- Comment on Emergent introspective awareness in large language models 4 weeks ago:
They aren’t “self-aware” at all. These thinking models spend a lot of turns coming up with chains of reasoning. They focus on the reasoning first, and their reasoning primes the context.
Like if I asked you to compute the area of a rectangle you might first say to yourself: “Okay, there’s a formula for that. L×W. This rectangle is 4 by 5, so the calculation is 4×5, which is 20.” They use tokens to delineate the “thinking” from their response and only give you the response, but most will also show the thinking if you want.
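Concretely, the wrapper just splits the raw output on those delimiter tokens and only shows you the part after them. A toy sketch (the actual tag names vary by model; these are made up):

```python
RAW_MODEL_OUTPUT = (
    "<think>Area of a rectangle is L x W. 4 x 5 = 20.</think>"
    "The area is 20 square units."
)

def split_thinking(raw: str, open_tag="<think>", close_tag="</think>"):
    """Separate the 'thinking' span from the visible answer (toy example)."""
    start = raw.find(open_tag)
    end = raw.find(close_tag)
    if start == -1 or end == -1:
        return "", raw
    thinking = raw[start + len(open_tag):end]
    answer = raw[end + len(close_tag):]
    return thinking, answer

thinking, answer = split_thinking(RAW_MODEL_OUTPUT)
print("shown to user:", answer)
print("hidden unless requested:", thinking)
```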
In contrast, if you ask an AI how it arrived at an answer after it gives it, it needs to either have the thinking in context or it is 100% bullshitting you. The reason injecting a thought affects the output is because that injected thought goes into the context. It’s like if you’re trying to count cash and I shout numbers at you, you might keep your focus on the task or the numbers might throw off your response.
Literally all LLMs do is predict tokens, but we’ve gotten pretty good at finding more clever ways to do it. Most of the advancements in capabilities have been very predictable. I had a crude Google-augmented context before ChatGPT released browsing capabilities, for instance. Tool use is just a low-randomness, high-confidence model that the wrapper uses to generate shell commands, which it then runs. That’s why you can ask it to do a task 100 times and it’ll execute correctly 99 times and then fail: it got a bad generation.
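A toy version of that wrapper loop, to show where the work actually sits (the `generate` function is a hypothetical stand-in for a real model call, not any particular vendor’s API):

```python
import subprocess

def generate(context: str) -> str:
    # Hypothetical stand-in for a real LLM call; a real wrapper would send
    # the context to a model configured with low temperature.
    if "OUTPUT:" in context:
        return "The command ran; here is my answer based on its output."
    return 'RUN: echo "hello from the tool loop"'

def tool_loop(user_request: str, max_steps: int = 5) -> str:
    context = user_request
    for _ in range(max_steps):
        reply = generate(context)
        if reply.startswith("RUN: "):
            # The model only *predicted* a command as text; the wrapper is
            # the thing that actually executes it.
            cmd = reply[len("RUN: "):]
            result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            # The command output goes back into context for the next prediction.
            context += f"\n{reply}\nOUTPUT: {result.stdout}"
        else:
            return reply
    return context

print(tool_loop("Say hello using a shell command."))
```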
My point is we are finding very smart ways of using this token prediction, but in the end that’s all it is. And something many researchers shockingly fail to grasp is that by putting anything into context—even a question—you are biasing the output. It simply predicts how it should respond to the question based on what is in its context. That is not at all the same thing as answering a question based on introspection or self-awareness. And that’s obviously the case because their technique only “succeeds” 20% of the time.
I’m not a researcher. But I keep coming across research like this and it’s a little disconcerting that the people inventing this shit sometimes understand less about it than I do. Don’t get me wrong, I know they have way smarter people than me, but anyone who just asks LLMs questions and calls themselves a researcher is fucking kidding themselves.
I use AI all the time. I think it’s a great tool and I’m investing a lot of my own time into developing tools for my own use. But it’s a bullshit machine that just happens to spit out useful bullshit, and people are desperate for it to have a deeper meaning. It… doesn’t.
- Comment on 4 weeks ago:
I think you’re thinking of Lycanthrondria.
- Comment on OpenAI says over a million people talk to ChatGPT about suicide weekly 4 weeks ago:
I definitely think there’s a skill/awareness issue here. Whatever their system is, it has to deal with false positives as well. Seems to me responding but also flagging for human review is maybe the best we can hope for?
I don’t think you’re wrong. I realize I’m being a bit obtuse because… well I am. Wasn’t lying. I would miss the first one. Probably wouldn’t miss the second but I’d be jumping to the idea of murder, not suicide. I think it’s great folks like you are tuned in. I hope they have such skilled people monitoring the flagged messages.
- Comment on OpenAI says over a million people talk to ChatGPT about suicide weekly 4 weeks ago:
“oh I just lost my job of 25 years. I’m going to New York, can you tell me the list of the highest bridges?”
TBH, I wouldn’t do any better. A vacation to take in a scenic vista might be the best thing to reset someone’s perspective. Is the expectation that it will perform better than humans here? That’s a high bar to set.
Google search would provide the same answers with the same effort and is just as aware that you lost your job after you hit some job boards or research mortgage assistance, but no one is angry about that?
- Comment on OpenAI says over a million people talk to ChatGPT about suicide weekly 4 weeks ago:
This is the thing. I’ll bet most of those million don’t have another support system. For certain it’s inferior in every way to professional mental health providers, but does it save lives? I think it’ll be a while before we have solid answers for that, but I would imagine lives saved by having ChatGPT > lives saved by having nothing.
The other question is how many people could access professional services but won’t because they use ChatGPT instead. I would expect them to have worse outcomes. Someone needs to put all the numbers together with a methodology for deriving those answers. Because the answer to this simple question is unknown.
- Comment on OpenAI says over a million people talk to ChatGPT about suicide weekly 4 weeks ago:
Definitely a case where you can’t resolve conflicting interests to everyone’s satisfaction.
- Comment on Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem 5 weeks ago:
I hate that ChatGPT 5 will absolutely not obey instructions to cut that shit out. ChatGPT 4 was worse by default but would at least listen.
- Comment on ChatGPT's new browser has potential, if you're willing to pay 5 weeks ago:
I appreciate it. I’m not going to overclock. I used to do that but these days I value stability over maximum performance. I’ll go with your suggestion, thank you.
- Comment on ChatGPT's new browser has potential, if you're willing to pay 5 weeks ago:
I’ll see about 128, then, but I’ll probably do 64. Just depends on cost. Any recs?