expr
@expr@programming.dev
- Comment on There's a brand new Dreamcast game that's out, called Mute Crimson, and it's free 1 day ago:
When I was a kid I had a disc that had some 100-200 SNES games burned onto it. No idea how we got it… My guess is my dad got it from a friend or something.
I played the shit out of Earth Defense Force, Knights of the Round, and Gundam Wing: Endless Duel with my Dreamcast. Though my mom made me stop playing Gundam Wing because one of the Gundams was called Deathscythe and the Satanic Panic was in full swing.
- Comment on xkcd #3142: -Style Pizza 1 day ago:
Chicago-style pizza too.
But yeah, the diversity isn’t as significant as the comic would lead you to believe.
- Comment on 5 Signs the AI Bubble is About to Burst 2 days ago:
Just because a lot of people are using them does not necessarily mean they are actually valuable. Your claim assumes that people are acting rationally regarding them, but that’s an erroneous assumption to make.
People are falling in “love” with them. Asking them for advice about mental health. Treating them like they are some kind of all-knowing oracle (or even as having any intelligence whatsoever), when in reality they know nothing and cannot reason at all.
Ultimately, they are immensely effective at creating a feedback loop that preys on human psychology and reinforces dependency. It’s a bit like addiction in that way.
- Comment on O no 4 days ago:
It’s just a different kind of difficulty, and of course not all jobs are created equal. But ultimately this rhetoric is the kind of thing the capitalists want. They want to pit “lower class” against “upper class”, when in reality, these distinctions are entirely irrelevant and it’s actually “the billionaire oligarchy squeezing every last drop out of the rest of us”. If you work for a living, you are “low class”.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 4 days ago:
So completely unqualified to speak to the experience of being a software engineer? Ok.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 4 days ago:
So you’ve just been talking out of your ass for the whole thread? That explains a lot.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 5 days ago:
Yeah, while our CTO is giddy like a schoolboy about LLMs, he thankfully hasn’t actually attempted to force them on anyone.
Unfortunately, a number of my peers now seem to have become irreparably LLM-brained.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 5 days ago:
It is, actually. The entire point of what I was saying is that you have all these engineers now who reflexively jump straight to their LLM for anything and everything. Using their brains to simply write some code themselves doesn’t even occur to them as something they should do. Much like you, by the sounds of it.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 6 days ago:
No, good engineers were not constantly googling problems, because for most topics the answer is either trivial enough that an experienced engineer can answer it immediately, or complex and specific enough to the company/architecture/task/whatever that googling it would not be useful. Stack Overflow and the like have only ever really been useful as an occasional memory aid for basic things you don’t use often enough to remember. Good engineers were, and still are, reasoning through problems, reading documentation, and iteratively piecing together system-level comprehension.
The nature of the situation hasn’t changed at all: problems are still either trivial enough that an LLM is pointless, or complex and specific enough that an LLM will get it wrong. The only difference is that an LLM will spit out plausible-sounding bullshit and convince people it’s valuable when it is, in fact, not.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 6 days ago:
Honestly, it’s heartbreaking to see so many good engineers fall for the hype and seemingly be unable to climb out of the hole. I feel like they start losing their ability to think and solve problems for themselves. Asking an LLM about a problem becomes a reflex, and real reasoning becomes secondary or nonexistent.
Executives are mostly irrelevant as long as they’re not forcing the whole company into the bullshit.
- Comment on "Rizz", "cooking" and "based" are going to be stereotypical old people words one day 1 week ago:
“Based” has been around forever; it’s not some new slang.
- Comment on The line between what is ai and what is programming will be very blurred in the future 1 week ago:
Not only does it not speed up work, it actually slows down work: metr.org/…/2025-07-10-early-2025-ai-experienced-o…
- Comment on Feeling insecure about going to a 'girlie pop' concert as a 30 year old man, am i overthinking it? 1 month ago:
I’m also in my 30s. I’ve been to a bunch of “girly” concerts with my wife and have had a great time at all of them.
It’s much easier to enjoy life when you let go of notions of what you should or should not be enjoying. Music doesn’t need to be gendered. You can just enjoy it for what it is.
In fact, I’d extend the idea to countless other facets of life: there’s so much pointless gendering in society that does a huge disservice to everyone, men included. I’ll give you a dumb example: I used to hold the notion in my younger years that if I were given a purse to hold, I had to hold it in such a way as to telegraph that it wasn’t actually my purse. Like grasp it like some kind of ape man or something. Like… What is the fucking point in that? It’s so goddamn dumb and childish. Now I often take turns holding my wife’s purse (it can be a bit heavy because it also doubles as a diaper bag for our toddler) and don’t give a single fuck about doing so.
I can give you countless other examples where I was raised with incredibly damaging ideas ultimately stemming from toxic masculinity that I have painstakingly excised from my psyche.
- Comment on Meet the AI vegans: They are choosing to abstain from using artificial intelligence for environmental, ethical and personal reasons. Maybe they have a point 1 month ago:
Yeah, pretty disappointing to see from the Guardian.
- Comment on Should I unplug my smart tv from the internet? 1 month ago:
Um yeah, definitely. My TV has never had Internet connectivity, nor should it.
- Comment on Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech. 1 month ago:
I obviously understand that they are AI in the original computer science sense. But that is a very specific definition and a very specific context. “Intelligence” as it’s used in natural language requires cognition, which is something that no computer is capable of. It implies an intellect and decision-making ability, neither of which computers possess.
We absolutely need to dispel this notion because it is already doing a great deal of harm all over. This language has absolutely contributed to the scores of people who misuse and misunderstand the technology.
- Comment on Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech. 1 month ago:
Again, more gibberish.
It seems like all you want to do is dream up fantastical doomsday scenarios with no basis in reality, rather than actually engage with the real-world technology and science and how it works. It is impossible to infer what might happen with a technology without first understanding that technology and its capabilities.
Do you know what training actually is? I don’t think you do. You seem to be under the impression that a model can somehow magically train itself. That is simply not how it works. Humans write programs to train models (Models, btw, are merely a set of numbers. They aren’t even code!).
When you actually use a model, here’s what’s happening:
- The interface you are using takes your input and encodes it as a sequence of numbers (done by a program written by humans)
- This sequence of numbers (known as a vector, in mathematics) is multiplied by the weights of the model (organized in a matrix, which is basically a collection of vectors), resulting in a new sequence of numbers (the output vector) (done by a program written by humans).
- This output vector is converted back into the representation you supplied (so if you gave a chatbot some text, it will turn the numbers into the equivalent textual representation of said numbers) (done by a program written by humans).
So a “model” is nothing more than a matrix of numbers (again, no code whatsoever), and using a model is simply a matter of (a human-written program) doing matrix multiplication to compute some output to present to the user.
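To make that concrete, here’s a toy sketch in Python of those three steps. The “vocabulary”, the encoding, and the single 4x4 weight matrix are all made up for illustration and are nothing like a real model’s, but the mechanics are identical: numbers in, matrix multiplication, numbers out.

```python
import numpy as np

# Toy "vocabulary" and weight matrix, invented purely for illustration.
vocab = ["hello", "world", "cat", "dog"]
weights = np.random.rand(4, 4)  # a real model has millions of these numbers

def encode(words):
    # Step 1: turn the input text into a sequence of numbers (a vector).
    return np.array([vocab.index(w) for w in words], dtype=float)

def decode(vector):
    # Step 3: turn the output numbers back into a textual representation.
    return [vocab[int(i) % len(vocab)] for i in np.round(vector)]

# Step 2: multiply the input vector by the weight matrix to get an output vector.
input_vec = encode(["hello", "cat", "dog", "world"])
output_vec = input_vec @ weights

print(decode(output_vec))
```

Every one of those steps is a human-written program doing ordinary arithmetic; there is no step anywhere in which the “model” decides anything.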
To greatly simplify: if you have a mathematical function like f(x) = 2x + 3, you can supply said function with a number to get a new number, e.g. f(1) = 2 * 1 + 3 = 5. LLMs are the exact same concept. They are a mathematical function, and you apply said function to input to produce output. Training is the process of a human writing a program to compute how said mathematical function should be defined, or in other words, the exact coefficients (also known as weights) to assign to each and every variable in said function (and the number of variables can easily be in the millions).
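As a purely illustrative sketch of what “finding the coefficients” means (made-up numbers, nowhere near what real training code looks like), here’s a program that repeatedly nudges the weights w and b of f(x) = w*x + b until its outputs match some target outputs:

```python
# Toy "training": find coefficients w and b so that f(x) = w*x + b
# reproduces the target outputs. Real training does the same thing with
# millions of coefficients and vastly more data.
xs = [1.0, 2.0, 3.0, 4.0]
targets = [5.0, 7.0, 9.0, 11.0]  # produced by the "true" function 2x + 3

w, b = 0.0, 0.0  # start from arbitrary coefficients
lr = 0.01        # how big a nudge to make each step

for step in range(5000):
    for x, target in zip(xs, targets):
        pred = w * x + b
        error = pred - target   # how far off the current coefficients are
        w -= lr * error * x     # nudge each coefficient to shrink the error
        b -= lr * error

print(w, b)  # converges toward w ≈ 2, b ≈ 3
```

The output of training is just those final numbers, nothing more.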
This is also, incidentally, why training is so resource intensive: repeatedly doing this multiplication for millions upon millions of variables is very expensive computationally and requires very specialized hardware to do efficiently. It happens to be the exact same kind of math used for computer graphics (matrix multiplication), which is why GPUs (or other even more specialized hardware) are so desired for training.
It should be pretty evident that every step of the process is completely controlled by humans. Computers always do precisely what they are told to do and nothing more, and that has been the case since their inception and will always continue to be the case. A model is a math function. It has no feelings, thoughts, reasoning ability, agency, or anything like that. Can f(x) = x + 3 get a virus? Of course not, and the question makes absolutely no sense to ask. It’s exactly the same thing for LLMs.
- Comment on Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech. 1 month ago:
What does that even mean? It’s gibberish. You fundamentally misunderstand how this technology actually works.
If you’re talking about the general concept of models trying to outcompete one another, the science already exists and has existed since 2014. They’re called Generative Adversarial Networks, and they’re an incredibly common training technique.
It’s incredibly important not to ascribe random science fiction notions to the actual science being done. LLMs are not some organism that scientists prod to coax into doing what they want. Researchers intentionally design a network topology for a task, initialize the weights of each node to random values, feed training data into the network (which, ultimately, is encoded into a series of numbers to be multiplied with the weights in the network), and measure the output numbers against some criteria to evaluate the model’s performance (or in other words, how close the output numbers are to a target set of numbers). Training then uses this number to adjust the weights and repeats the process all over again until the numbers the model produces are “close enough”. Sometimes, the performance of a model is compared against that of another model being trained in order to determine how well it’s doing (the aforementioned Generative Adversarial Networks). But that is a far cry from models… I dunno, training themselves or something? It just doesn’t make any sense.
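For what it’s worth, a minimal sketch of that adversarial setup looks like the following (PyTorch, with toy data and made-up layer sizes; a real GAN is just this loop scaled up): two networks, one producing samples and one scoring them, each having its weights adjusted based on a plain numeric loss.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; sizes and data are invented for illustration.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_data = torch.randn(32, 2) + 3.0  # stand-in "real" samples

for step in range(200):
    # Discriminator step: push its scores toward 1 for real samples, 0 for fakes.
    fake = G(torch.randn(32, 8)).detach()
    d_loss = loss_fn(D(real_data), torch.ones(32, 1)) + loss_fn(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: adjust its weights so the discriminator scores its output as "real".
    fake = G(torch.randn(32, 8))
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Both “adversaries” are just weight matrices being nudged by a human-written loop; neither one is doing anything on its own.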
The technology is not magic, and it has been around for a long time. There has not been some recent incredible breakthrough, despite what you may have been led to believe. The only difference in the modern era is the amount of raw computing power and the sheer volume of (illegally obtained) training data being thrown at models by massive corporations. This has led to models that perform much better than previous ones (performance, in this case, meaning “how close does it sound to text a human would write?”), but ultimately they are still doing the exact same thing they have been doing for years.
- Comment on Oncoliruses: LLM Viruses are the future and will be a pest, say good bye to decent tech. 1 month ago:
sigh this isn’t how any of this works. Repeat after me: LLMs. ARE. NOT. INTELLIGENT. They have no reasoning ability and no intent. They are parroting statistically likely sequences of words based on how often those sequences of words appear in their training data. It is pure folly to assign any kind of agency to them. This is speculative nonsense with no basis in actual technology. It’s purely in the realm of science fiction.
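If you want to see the basic idea in miniature, here’s a toy next-word “model” (obviously nothing like a real LLM, which learns weights over tokens rather than counting raw word pairs): it counts which words tend to follow which in a tiny corpus and samples the continuation from those frequencies.

```python
from collections import Counter, defaultdict
import random

# Toy next-word predictor: count how often each word follows another in a tiny
# corpus, then sample a continuation according to those frequencies.
corpus = "the cat sat on the mat and the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    counts = following[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # usually "cat", because "the cat" appears most often
```

There is no understanding anywhere in there, just frequencies; scale that idea up enormously and you get plausible-sounding text, not intelligence.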
- Comment on ‘If I switch it off, my girlfriend might think I’m cheating’: inside the rise of couples location sharing 1 month ago:
This is like, the opposite of old-fashioned. Calling your wife when you’re on the way home is old-fashioned.
This article is the first time I’m actually hearing about this idea because it never even occurred to me as something people would actually want to do. I frankly don’t see the point of this nonsense. I would much rather talk to my wife on the phone and communicate with her about plans. It’s much more human and normal, and facilitates good communication habits. It takes 2 minutes to give my wife a call and, you know what, I get to talk to my wife! We don’t need technology invading absolutely every aspect of our lives. We don’t need to be constantly plugged in and attached to our phones at the hip.
It also has other downsides, like making it hard to surprise your partner, battery drain from the constant location chatter, etc. In fact, it seems like all downside with no actual benefit (setting aside the trust stuff, because it’s pretty irrelevant either way).
- Comment on [deleted] 1 month ago:
I assume you’re young?
Big streamers can make okay money, but to be honest it’s not really something to aspire to generally speaking. It’s not nearly as glamorous as it sounds. When you spend all of your time doing something that’s supposed to be relaxing/fun as a job and you can’t even necessarily do what you want anyway, it’s not really fun anymore. And beyond that, only a very small portion of people that attempt it actually make money from it and it’s much more about how you can manipulate social media platforms than it is anything about gaming.
- Comment on It's rude to show AI output to people 1 month ago:
I was trying to help onboard a new lead engineer and was working through debugging his Caddy config over Slack. I was clearly putting in effort to help him diagnose his issue, and he posts “I asked chatgpt and it said these two lines need to be reversed”, which was completely false (Caddy has a system for reordering directives) and honestly just straight up insulting. Fucking pissed me off. People need to stop bringing AI slop into conversations. It isn’t welcome and can fuck right off.
The actual issue? He forgot to restart his development server. 😡
- Comment on Thoughts?? 2 months ago:
Blows Excel out of the water, and it’s not even close. And it’s free, open source, and completely extensible (with Python, not some godforsaken excuse for a programming language).
- Comment on Firefox is fine. The people running it are not 2 months ago:
I use Firefox on mobile all the time. Works fine for me. The fact that I can get an ad blocker on mobile makes it a no-brainer to use over Chrome.
- Comment on Is anyone else not feeling that patriotic for July 4? 2 months ago:
That’s the majority of Americans. Beyond what was almost certainly a stolen election (large-scale, billionaire-bankrolled propaganda campaigns, voter disenfranchisement, and probably voting machine manipulation), Trump’s disapproval rating has skyrocketed since he started that shit.
We are in an awful fascist quagmire of a situation that we are going to have to fight to free ourselves from, but that doesn’t mean that the actions of this administration actually represent us.
- Comment on Is anyone else not feeling that patriotic for July 4? 2 months ago:
Not sure what you’re referring to, but the 4th hasn’t really changed. Maybe you’re confusing it with the (laughable) military parade Trump did for his own birthday?
Personally I’ve long found patriotism to be a pretty abhorrent concept, but I’ve always enjoyed the opportunity to spend time with my family regardless. To me, the 4th is much more about community than it is the country. And while this country is fucking awful, I do have a pretty great community around me that I’m grateful for.
- Comment on 2 months ago:
Yeah that’s a complete myth and not based on actual science.
- Comment on [deleted] 2 months ago:
It’s perfectly reasonable to not want to sleep over at your parents’ house after only a month of dating. To be honest, it’s reasonable to not ever want to do that. It’s weird sleeping in someone else’s house, period.
But especially after just a month of dating, your parents may as well be strangers to him. He likely doesn’t have any sense of the cultural differences between how he was raised and how your family operates, like what behaviors your parents consider faux pas, etc.
To be honest I think you’re really getting ahead of yourself. Take your time with the relationship and build trust and the foundations of a great relationship. It always takes time and patience. You guys are still just starting to learn about each other.
- Comment on [deleted] 2 months ago:
I met my wife on Bumble in 2019 (I was 28 at the time). It can work, but you have to be willing to sift through a lot of bullshit and be patient. You also need to be able to handle rejection and mistreatment (like getting stood up or ghosted). It’s ultimately a numbers game, and it takes time to find someone who is actually right for you.
I expect it’s probably also not nearly as bad for older age groups. At your age, I think people are going to be a lot more likely to be direct and know what they want.
My advice is to try it out. Worst case, you decide it’s not for you and try something else.
- Comment on Driving through Nebraska twice nearly broke me. The people who live there must be among the hardest motherfuckers alive. 2 months ago:
Yeah, the city is progressive. Tornadoes… not as much as you would think. There was one last year that hit the outskirts of town a bit. The last one to do any damage before that was around 2017.