expr
@expr@programming.dev
- Comment on Google Confirms Non-ADB APK Installs Will Require Developer Registration 17 hours ago:
It’s built-in.
- Comment on Cause and Effect 6 days ago:
It’s not talking about a doctorate, it’s talking about actually taking education (of all levels) seriously, because education is the primary means by which a populace becomes inoculated against mis/disinformation.
- Comment on Reddit stock falls for second day as references to its content in ChatGPT responses plummet 6 days ago:
Right, and if the moderation allows Nazi ideology to run rampant, you have a Nazi town. Especially when it’s all mostly bots spewing Nazi talking points anyway.
Refusing to fight hate speech is tantamount to supporting it.
- Comment on Reddit stock falls for second day as references to its content in ChatGPT responses plummet 6 days ago:
Isn’t it already basically Nazi town?
- Comment on AI Coding Is Massively Overhyped, Report Finds 1 week ago:
Actually typing out code has literally never been the bottleneck. It’s a vanishingly small amount of what we do. An experienced engineer can type out bash or Python scripts without so much as blinking. And better yet, they can do it without completely fabricating commands and library functions.
The hard part is truly understanding what it is you’re trying to do in the first place, and that fundamentally requires a level of semantic comprehension that LLMs do not in any way possess.
It’s very much like the “no code” solutions of yesteryear. They sound great on paper until you’re faced with the reality of the buggy, unmaintainable nightmare pile of spaghetti code that they vomit into your repo.
LLMs are truly a complete joke for software development tasks. I remain among the top 3-4 developers in terms of speed and output at my workplace (and all of the fastest people refuse to use LLMs as well), and I don’t create MRs chock full of bullshit that has to be ripped out (fucking sick of telling people to delete absolutely useless tests that do nothing but slow down our CI pipeline). The slowest people are those that keep banging their head against the LLM for “efficiency” when it’s anything but.
It’s the fucking stupidest trend I’ve seen in my career and I can’t wait until people finally wake up and realize it’s both incredibly inefficient and incredibly wasteful.
- Comment on Cloudflare bankrolls fascists 2 weeks ago:
After the code of conduct nonsense I was extremely skeptical of Ladybird. Seems that skepticism is well-founded. Fascists gonna fascist, I guess.
- Comment on There's a brand new Dreamcast game that's out, called Mute Crimson, and it's free 3 weeks ago:
When I was a kid I had a disc that had some 100-200 SNES games burned onto it. No idea how we got it… My guess is my dad got it from a friend or something.
I played the shit out of Earth Defense Force, Knights of the Round, and Gundam Wing: Endless Duel with my Dreamcast. Though my mom made me stop playing Gundam Wing because one of the Gundams was called Deathscythe and the Satanic Panic was in full swing.
- Comment on xkcd #3142: -Style Pizza 3 weeks ago:
Chicago-style pizza too.
But yeah, the diversity isn’t as significant as the comic would lead you to believe.
- Comment on 5 Signs the AI Bubble is About to Burst 3 weeks ago:
Just because a lot of people are using them does not necessarily mean they are actually valuable. Your claim assumes that people are acting rationally regarding them, but that’s an erroneous assumption to make.
People are falling in “love” with them. Asking them for advice about mental health. Treating them like they are some kind of all-knowing oracle (or as if they have any intelligence whatsoever), when in reality they know nothing and cannot reason at all.
Ultimately they are immensely effective at creating a feedback loop that preys on human psychology and reinforces a dependency on them. It’s a bit like addiction in that way.
- Comment on O no 3 weeks ago:
It’s just a different kind of difficulty, and of course not all jobs are created equal. But ultimately this rhetoric is the kind of thing the capitalists want. They want to pit “lower class” against “upper class”, when in reality, these distinctions are entirely irrelevant and it’s actually “the billionaire oligarchy squeezing every last drop out of the rest of us”. If you work for a living, you are “low class”.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 weeks ago:
So completely unqualified to speak to the experience of being a software engineer? Ok.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 weeks ago:
So you’ve just been talking out of your ass for the whole thread? That explains a lot.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 weeks ago:
Yeah, fortunately, while our CTO is giddy like a schoolboy about LLMs, he hasn’t actually attempted to force them on anyone.
Unfortunately, a number of my peers now seem to have become irreparably LLM-brained.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 weeks ago:
It is, actually. The entire point of what I was saying is that you have all these engineers now that reflexively jump straight to their LLM for anything and everything. Using their brains to simply write some code themselves doesn’t even occur to them as something they should do. Much like you do, by the sounds of it.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 weeks ago:
No, good engineers were not constantly googling problems, because most problems are either trivial enough that an experienced engineer can answer them immediately, or complex and specific enough to the company/architecture/task/whatever that googling them would not be useful. Stack Overflow and the like have only ever really been useful as the occasional memory aid for basic things that you don’t use often enough to remember how to do. Good engineers were, and still are, reasoning through problems, reading documentation, and iteratively piecing together system-level comprehension.
The nature of the situation hasn’t changed at all: problems are still either trivial enough that an LLM is pointless, or complex and specific enough that an LLM will get it wrong. The only difference is that an LLM will spit out plausible-sounding bullshit and convince people it’s valuable when it is, in fact, not.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 weeks ago:
Honestly, it’s heartbreaking to see so many good engineers fall for the hype and seemingly be unable to climb out of the hole. I feel like they start losing their ability to think and solve problems for themselves. Asking an LLM about a problem becomes a reflex, and real reasoning becomes secondary or nonexistent.
Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.
- Comment on "Rizz", "cooking" and "based" are going to be stereotypical old people words one day 4 weeks ago:
“Based” has been around forever; it’s not some new slang.
- Comment on The line between what is ai and what is programming will be very blurred in the future 4 weeks ago:
Not only does it not speed up work, it actually slows down work: metr.org/…/2025-07-10-early-2025-ai-experienced-o…
- Comment on Feeling insecure about going to a 'girlie pop' concert as a 30 year old man, am i overthinking it? 1 month ago:
I’m also in my 30s. I’ve been to a bunch of “girly” concerts with my wife and have had a great time at all of them.
It’s much easier to enjoy life when you let go of notions of what you should or should not be enjoying. Music doesn’t need to be gendered. You can just enjoy it for what it is.
In fact, I’d extend the idea to countless other facets of life: there’s so much pointless gendering in society that does a huge disservice to everyone, men included. I’ll give you a dumb example: I used to hold the notion in my younger years that if I were given a purse to hold, I had to hold it in such a way as to telegraph that it wasn’t actually my purse. Like grasp it like some kind of ape man or something. Like… what is the fucking point of that? It’s so goddamn dumb and childish. Now I often take turns holding my wife’s purse (it can be a bit heavy because it also doubles as a diaper bag for our toddler) and don’t give a single fuck about doing so.
I can give you countless other examples where I was raised with incredibly damaging ideas ultimately stemming from toxic masculinity that I have painstakingly excised from my psyche.
- Comment on Meet the AI vegans: They are choosing to abstain from using artificial intelligence for environmental, ethical and personal reasons. Maybe they have a point 2 months ago:
Yeah, pretty disappointing to see from the Guardian.
- Comment on Should I unplug my smart tv from the internet? 2 months ago:
Um yeah, definitely. My TV has never had Internet connectivity, nor should it.
- Comment on Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech. 2 months ago:
I obviously understand that they are AI in the original computer science sense. But that is a very specific definition and a very specific context. “Intelligence” as it’s used in natural language requires cognition, which is something that no computer is capable of. It implies an intellect and decision-making ability, none of which computers possess.
We absolutely need to dispel this notion because it is already doing a great deal of harm all over. This language has absolutely contributed to the scores of people who misuse and misunderstand the technology.
- Comment on Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech. 2 months ago:
Again, more gibberish.
It seems like all you want to do is dream of fantastical doomsday scenarios with no basis in reality, rather than actually engaging with the real world technology and science and how it works. It is impossible to infer what might happen with a technology without first understanding the technology and its capabilities.
Do you know what training actually is? I don’t think you do. You seem to be under the impression that a model can somehow magically train itself. That is simply not how it works. Humans write programs to train models (Models, btw, are merely a set of numbers. They aren’t even code!).
When you actually use a model, here’s what’s happening:
- The interface you are using takes your input and encodes it as a sequence of numbers (done by a program written by humans)
- This sequence of numbers (known as a vector, in mathematics) is multiplied by the weights of the model (organized in a matrix, which is basically a collection of vectors), resulting in a new sequence of numbers (the output vector) (done by a program written by humans).
- This output vector is converted back into the representation you supplied (so if you gave a chatbot some text, it will turn the numbers into the equivalent textual representation of said numbers) (done by a program written by humans).
So a “model” is nothing more than a matrix of numbers (again, no code whatsoever), and using a model is simply a matter of (a human-written program) doing matrix multiplication to compute some output to present the user.
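To make that concrete, here’s a toy sketch of the whole pipeline in plain Python/NumPy. The vocabulary, the weights, and the one-hot encoding are all made up for illustration; a real LLM does exactly this, just with a vastly bigger matrix of numbers and a fancier encoding:

```python
import numpy as np

# Toy "vocabulary": the only inputs/outputs this pretend model knows about.
# (Completely made up for illustration.)
vocab = ["hello", "world", "foo", "bar"]

# The "model" itself: nothing but a matrix of numbers (weights). No code in here.
rng = np.random.default_rng(0)
weights = rng.standard_normal((len(vocab), len(vocab)))

def encode(token):
    """Step 1: a human-written program turns your input into a sequence of numbers."""
    vec = np.zeros(len(vocab))
    vec[vocab.index(token)] = 1.0
    return vec

def decode(vec):
    """Step 3: a human-written program turns the output numbers back into text."""
    return vocab[int(np.argmax(vec))]

def use_model(token):
    """Step 2: 'using the model' is just matrix multiplication, nothing more."""
    return decode(weights @ encode(token))

print(use_model("hello"))  # prints whatever word the output numbers happen to point at
```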
To greatly simplify: if you have a mathematical function like f(x) = 2x + 3, you can supply said function with a number to get a new number, e.g. f(1) = 2 * 1 + 3 = 5. LLMs are the exact same concept. They are a mathematical function, and you apply said function to input to produce output. Training is the process of a human writing a program to compute how said mathematical function should be defined, or in other words, the exact coefficients (also known as weights) to assign to each and every variable in said function (and the number of variables can easily be in the millions).
This is also, incidentally, why training is so resource intensive: repeatedly doing this multiplication for millions upon millions of variables is very expensive computationally and requires very specialized hardware to do efficiently. It happens to be the exact same kind of math used for computer graphics (matrix multiplication), which is why GPUs (or other even more specialized hardware) are so desired for training.
It should be pretty evident that every step of the process is completely controlled by humans. Computers always do precisely what they are told to do and nothing more; that has been the case since their inception and will always continue to be the case. A model is a math function. It has no feelings, thoughts, reasoning ability, agency, or anything like that. Can f(x) = x + 3 get a virus? Of course not, and the question makes absolutely no sense to ask. It’s exactly the same thing for LLMs.
- Comment on Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech. 2 months ago:
What does that even mean? It’s gibberish. You fundamentally misunderstand how this technology actually works.
If you’re talking about the general concept of models trying to outcompete one another, the science already exists, and has existed since 2014. It’s called a Generative Adversarial Network, and it’s an incredibly common training technique.
It’s incredibly important not to ascribe random science fiction notions to the actual science being done. LLMs are not some organism that scientists prod to coax it into doing what they want. They intentionally design a network topology for a task, initialize the weights of each node to random values, feed in training data into the network (which, ultimately, is encoded into a series of numbers to be multiplied with the weights in the network), and measure the output numbers against some criteria to evaluate the model’s performance (or in other words, how close the output numbers are to a target set of numbers). Training will then use this number to adjust the weights, and repeat the process all over again until the numbers the model produces are “close enough”. Sometimes, the performance of a model is compared against that of another model being trained in order to determine how well it’s doing (the aforementioned Generative Adversarial Networks). But that is a far cry from models… I dunno, training themselves or something? It just doesn’t make any sense.
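If it helps to see how un-magical that loop is, here’s a toy, hand-rolled sketch of it in Python (the data and the function shape are made up; real training is this same idea at enormous scale, with millions of weights and specialized hardware):

```python
import random

# Made-up training data: input/target pairs the function should learn to reproduce.
data = [(x, 2 * x + 3) for x in range(-5, 6)]

# Pick a shape for the function and initialize its weights to random values.
w = random.uniform(-1.0, 1.0)
b = random.uniform(-1.0, 1.0)
learning_rate = 0.01

# The training loop: run an input through the function, measure how far the
# output number is from the target number, nudge the weights to shrink that
# error, and repeat until the outputs are "close enough".
for step in range(5000):
    x, target = random.choice(data)
    output = w * x + b               # the "model" producing a number
    error = output - target          # how wrong that number is
    w -= learning_rate * error * x   # adjust the weights...
    b -= learning_rate * error       # ...and go around again

print(f"learned: f(x) = {w:.2f}x + {b:.2f}")  # should land near f(x) = 2x + 3
```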
The technology is not magic, and has been around for a long time. There’s not been some recent incredible breakthrough, unlike what you may have been led to believe. The only difference in the modern era is the amount of raw computing power and sheer volume of (illegally obtained) training data being thrown at models by massive corporations. This has led to models that have much better performance than previous ones (performance, in this case, meaning “how much does it sound like text a human would write?”), but ultimately they are still doing the exact same thing they have been for years.
- Comment on Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech. 2 months ago:
sigh this isn’t how any of this works. Repeat after me: LLMs. ARE. NOT. INTELLIGENT. They have no reasoning ability and no intent. They are parroting statistically-likely sequences of words based on how often those sequences of words appear in their training data. It is pure folly to assign any kind of agency to them. This is speculative nonsense with no basis in actual technology. It’s purely in the realm of science fiction.
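If you want to see how shallow “statistically-likely sequences of words” really is, here’s a toy sketch of the idea (the “training data” is a made-up sentence; a real LLM is the same principle scaled up enormously with a much fancier statistical model):

```python
import random
from collections import Counter, defaultdict

# Made-up "training data". The real thing is terabytes of scraped text, but the
# principle is identical: count which words tend to follow which other words.
corpus = "the cat sat on the mat and the cat ate the fish on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def parrot(word, length=8):
    """Emit a statistically-likely sequence of words: no meaning, no intent, just counts."""
    out = [word]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:  # nothing ever followed this word in the training data
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(parrot("the"))  # e.g. "the cat sat on the rug" -- plausible-sounding, zero understanding
```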
- Comment on ‘If I switch it off, my girlfriend might think I’m cheating’: inside the rise of couples location sharing 2 months ago:
This is like, the opposite of old-fashioned. Calling your wife when you’re on the way home is old-fashioned.
This article is the first time I’m actually hearing about this idea because it never even occurred to me as something people would actually want to do. I frankly don’t see the point of this nonsense. I would much rather talk to my wife on the phone and communicate with her about plans. It’s much more human and normal, and facilitates good communication habits. It takes 2 minutes to give my wife a call and, you know what, I get to talk to my wife! We don’t need technology invading absolutely every aspect of our lives. We don’t need to be constantly plugged in and attached to our phones at the hip.
It also has other downsides, like making it hard to surprise your partner, constant battery drain from the constant location chatter, etc. In fact, it seems like all downside with no actual benefit (setting aside the trust stuff, because it’s pretty irrelevant either way).
- Comment on [deleted] 2 months ago:
I assume you’re young?
Big streamers can make okay money, but to be honest it’s not really something to aspire to generally speaking. It’s not nearly as glamorous as it sounds. When you spend all of your time doing something that’s supposed to be relaxing/fun as a job and you can’t even necessarily do what you want anyway, it’s not really fun anymore. And beyond that, only a very small portion of people that attempt it actually make money from it and it’s much more about how you can manipulate social media platforms than it is anything about gaming.
- Comment on It's rude to show AI output to people 2 months ago:
I was trying to help onboard a new lead engineer and I was working through debugging his Caddy config on Slack. I’m clearly putting in effort to help him diagnose his issue and he posts “I asked ChatGPT and it said these two lines need to be reversed”, which was completely false (Caddy has a system for reordering directives) and honestly just straight up insulting. Fucking pissed me off. People need to stop bringing AI slop into conversations. It isn’t welcome and can fuck right off.
The actual issue? He forgot to restart his development server. 😡
- Comment on Thoughts?? 2 months ago:
Blows Excel out of the water, and it’s not even close. And it’s free, open source, and completely extensible (with Python, not some godforsaken excuse for a programming language).
- Comment on Firefox is fine. The people running it are not 2 months ago:
I use Firefox on mobile all the time. Works fine for me. The fact that I get an ad blocker on mobile makes it a no-brainer to use over Chrome.