kescusay
@kescusay@lemmy.world
Developer and refugee from Reddit
- Comment on Linux Reaches 5% Desktop Market Share In USA 1 week ago:
Banning open source would basically destroy the entire Internet in the United States. No tech bro is going to want that.
- Comment on AI slows down some experienced software developers, study finds 1 week ago:
That’s fair. I guess what I hate is what the term represents, rather than the term itself.
- Comment on AI slows down some experienced software developers, study finds 1 week ago:
I actively hate the term “vibe coding.” The fact is, while using an LLM for certain tasks is helpful, trying to build out an entire production-ready application through prompts alone is a huge waste of time and is guaranteed to produce garbage code.
At some point, people like your coworker are going to have to look at the code and work on it, and if they don’t know what they’re doing, they’ll fail.
I commend them for giving it a shot, but I also commend them for recognizing it wasn’t working.
- Comment on AI slows down some experienced software developers, study finds 1 week ago:
Are you using agent mode?
- Comment on AI slows down some experienced software developers, study finds 1 week ago:
That’s still not actually knowing anything. It’s just temporarily adding more context to its model.
And it’s always very temporary. I have a yarn project I’m working on right now, and I used Copilot in VS Code in agent mode to scaffold it as an experiment. One of the refinements I made to the prompt file used to build it was adding reminders throughout for things it wouldn’t need reminding of if it actually “knew” the repo.
- I had to constantly remind it that it’s a yarn project, otherwise it would inevitably start trying to use NPM as it progressed through the prompt.
- For some reason, when it’s in agent mode and it makes a mistake, it wants to delete files it has fucked up, which always requires human intervention, so I peppered the prompt with reminders not to do that, but to blank the file out and start over in it.
- The frontend of the project uses TailwindCSS. No matter how often I corrected it, it kept trying to downgrade the configuration to an earlier version instead of using the current one, so I wrote the entire configuration for it by hand and inserted it into the prompt file. If I let it try to build the configuration itself, it would inevitably fuck it up and then say something completely false, like, “The version of TailwindCSS we’re using is still in beta, let me try downgrading to the previous version.”
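To give a flavor of it, the reminders looked something like this (a paraphrased sketch, not the literal prompt file):

```
## Conventions (repeated before every build step)
- This is a YARN project. Use `yarn add` / `yarn run`. Never use npm.
- If a generated file is broken, do NOT delete it. Blank it out and
  regenerate its contents in place.
- Use the TailwindCSS configuration included verbatim below. Do not
  downgrade TailwindCSS or rewrite the configuration.
```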
I’m not saying it wasn’t helpful. It probably cut 20% off the time it would have taken me to scaffold out the app myself, which is significant. But it certainly couldn’t keep track of the context provided by the repo, even though it was creating that context itself.
Working with Copilot is like working with a very talented and fast junior developer whose methamphetamine addiction has been getting the better of them lately, and who has early-onset dementia or a brain injury that destroyed their short-term memory.
- Comment on AI slows down some experienced software developers, study finds 2 weeks ago:
Like I said, I do find it useful at times. But not only shouldn’t it replace coders, it fundamentally can’t. At least, not without a fundamental rearchitecting of how they work.
The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.
On top of that, spoken and written language are very imprecise, and there’s no way for an LLM to derive what you really wanted from context clues such as your tone of voice.
Take the phrase “fruit flies like a banana.” Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?
It’s a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we’ve got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don’t.
- Comment on AI slows down some experienced software developers, study finds 2 weeks ago:
Experienced software developer here. “AI” is useful to me in some contexts. Specifically, it saves me time when I want to scaffold out a completely new application (so I’m not worried about clobbering existing code) and don’t want to do it by hand.
And… that’s about it. It sucks at code review, and will break shit in your repo if you let it.
- Comment on Companies That Tried to Save Money With AI Are Now Spending a Fortune Hiring People to Fix Its Mistakes 2 weeks ago:
It’s true, although the smart companies aren’t laying off workers in the first place, because they’re treating AI as a tool to enhance their productivity rather than a tool to replace them.
- Comment on Microsoft axe another 9000 in continued AI push 3 weeks ago:
The thing is, if they just pared those claims down a bit, they’d be accurate. Switch from “Copilot can build an entire application for you from scratch while giving you a blowjob” to “Copilot can help developers by automating some repetitive and time-consuming tasks,” and you still have a good thing.
- Comment on Microsoft axe another 9000 in continued AI push 3 weeks ago:
Weird. The cuts apparently include the cancellation of several planned games, and many of them will hit the Xbox division.
I would’ve thought that the increased productivity that Copilot theoretically gives developers would have resulted in the reduced staff still being able to finish those games.
- Comment on We need to stop pretending AI is intelligent 3 weeks ago:
Autocomplete on steroids, but suffering dementia.
- Comment on Your TV Is Spying On You 4 weeks ago:
Yep. My TV never has been and never will be on the Internet in any way. I picked it for its screen quality, and the fact that it also has “smart” components never even entered into the decision. Because those smart components will literally never do anything.
- Comment on Vibe coding is to coding what microwaving is to cooking. 4 weeks ago:
I don’t think that comparison is apt. Unlike with music, there are objectively inefficient and badly-executed ways for a program to function, and if you’re only “vibing,” you’re not going to know the difference between such code and clean, efficient code.
Case in point: Typescript. Typescript is a language built on top of JavaScript with the intent of bringing strong and static type-checking sanity to it. Using Copilot, it’s possible to create a Typescript application without actually knowing the language. However, what you’ll end up with will almost certainly be full of the `any` type, which turns off type-checking and negates the benefits of using Typescript in the first place. Your code will be much harder to maintain and fix bugs in. And you won’t know that, because you’re not a Typescript developer, you’re a Copilot “developer.”
I’m not trying to downplay the benefits of using Copilot. Like I said, it’s something I use myself, and it’s a really helpful tool in the developer toolbox. But it’s not the only tool in the toolbox for anyone but “vibe coders.”
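To illustrate the difference (a contrived sketch, not actual Copilot output):

```typescript
// With `any`, the compiler checks nothing, so this typo ships and
// silently returns NaN at runtime.
function totalAny(items: any): any {
  return items.reduce((sum: any, item: any) => sum + item.prise, 0);
}

// With real types, the same typo is a compile-time error:
// "Property 'prise' does not exist on type 'LineItem'."
interface LineItem {
  name: string;
  price: number;
}

function total(items: LineItem[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}
```

The `any` version compiles happily, which is exactly why it hides bugs.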
- Comment on Vibe coding is to coding what microwaving is to cooking. 5 weeks ago:
I’m of two minds on this.
On the one hand, I find tools like Copilot integrated into VS Code to be useful for taking some of the drudgery out of coding. Case in point: If I need to create a new schema for an ORM, having Copilot generate it according to my specifications is speedy and helpful. It will be more complete and thorough than the first draft I’d come up with on my own.
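For example, this is the sort of boilerplate I mean (a hypothetical sketch - TypeORM here, though any ORM’s schema definitions work much the same way):

```typescript
import { Entity, PrimaryGeneratedColumn, Column, CreateDateColumn, Index } from "typeorm";

// A routine entity definition: pure boilerplate once the fields
// and constraints have been specified up front.
@Entity("users")
export class User {
  @PrimaryGeneratedColumn("uuid")
  id!: string;

  @Index({ unique: true })
  @Column({ length: 254 })
  email!: string;

  @Column({ length: 100 })
  displayName!: string;

  @Column({ default: true })
  isActive!: boolean;

  @CreateDateColumn()
  createdAt!: Date;
}
```

Writing a dozen of these by hand is tedious; generating them from a precise spec and then reviewing the output is fast.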
On the other, the actual code produced by Copilot is always rife with errors and bloat, it’s never DRY, and if you’re not already a competent developer and try to “vibe” your way to usability, what you’ll end up with will frankly suck, even if you get it into a state where it technically “works.”
Leaning into the microwave analogy, it’s the difference between being a chef who happens to have a microwave as one of their kitchen tools, and being a “chef” who only knows how to follow microwave instructions on prepackaged meals. “Vibe coders” aren’t coders at all and have no real grasp of what they’re creating or why it’s not as good as what real coders build, even if both make use of the same tools.
- Comment on Half of companies planning to replace customer service with AI are reversing course 5 weeks ago:
Seems like it’s cheaper and more efficient just to pay people to fuck on camera.
- Comment on Half of companies planning to replace customer service with AI are reversing course 5 weeks ago:
Oh my God… The best/worst thing about the idea of AI porn is how AI tends to forget anything that isn’t still on the screen. So now I’m imagining the camera zooming in on someone’s jibblies, then zooming out and now it’s someone else’s jibblies, and the background is completely different.
- Comment on The Case for Software Craftsmanship in the Era of Vibes — Zed's Blog 1 month ago:
The trick to using an AI agent effectively is already knowing exactly what you want, typing the request out in excruciating detail, and being a good developer who properly reviews code so you catch all the errors and repetition the AI agent will absolutely include.
So… Yeah. 100% agree. AI agents are useful, but impossible to use well if you aren’t already skilled with code.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
Well, technically, yes. You’re right. But they’re a specific, narrow type of neural network, while I was thinking of the broader class and more traditional applications, like data analysis. I should have been more specific.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
That’s only part of the problem. Yes, JavaScript is a fragmented clusterfuck. Typescript is leagues better, but by no means perfect. Still, that doesn’t explain why the LLM can’t recall that I’m using Yarn while it’s processing the instruction that specifically told it to use Yarn. Or why it tries to start editing code when I tell it not to. Those are still issues that aren’t specific to the language.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
But it still manages to fuck it up.
I’ve been experimenting with using Claude’s Sonnet model in Copilot in agent mode for my job, and one of the things that’s become abundantly clear is that it has certain types of behavior that are heavily represented in the model, so it assumes you want that behavior even if you explicitly tell it you don’t.
Say you’re working in a yarn workspaces project, and you instruct Copilot to build and test a new dashboard using an instruction file. You’ll need to include explicit and repeated reminders all throughout the file to use yarn, not NPM, because even though yarn is very popular today, there are so many older examples of using NPM in its model that it’s just going to assume that’s what you actually want - thereby fucking up your codebase.
I’ve also had lots of cases where I tell it I don’t want it to edit any code, just to analyze and explain something that’s there and how to update it… and then I have to stop it from editing code anyway, because halfway through it forgot that I didn’t want edits, just explanations.
- Comment on Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. 1 month ago:
I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy “dataset” that a proper neural network incorporates and reasons with, and the LLM could be kept real-time updated (sort of) with MCP servers that incorporate anything new it learns.
But I don’t think we’re anywhere near there yet.
- Comment on If AI was going to advance exponentially I'd of expected it to take off by now. 1 month ago:
Ah, did they finally fix it? I guess a lot of people were seeing it fail and they updated the model. Which version of ChatGPT was it?
- Comment on If AI was going to advance exponentially I'd of expected it to take off by now. 1 month ago:
Ask ChatGPT to list every U.S. state that has the letter ‘o’ in its name.
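For contrast, the question is deterministic - a few lines of TypeScript settle it exactly, which is what makes a wrong answer so telling:

```typescript
const US_STATES = [
  "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
  "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
  "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
  "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
  "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
  "New Hampshire", "New Jersey", "New Mexico", "New York",
  "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
  "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
  "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
  "West Virginia", "Wisconsin", "Wyoming",
];

// Case-insensitive check for the letter 'o' anywhere in the name.
const withO = US_STATES.filter((name) => name.toLowerCase().includes("o"));
console.log(`${withO.length} states contain 'o': ${withO.join(", ")}`);
```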
- Comment on If AI was going to advance exponentially I'd of expected it to take off by now. 1 month ago:
Not true. Not entirely false, but not true.
Large language models have their legitimate uses. I’m currently in the middle of a project I’m building with assistance from Copilot for VS Code, for example.
The problem is that people think LLMs are actual AI. They’re not.
My favorite example - and the reason I often cite for why companies that try to fire all their developers are run by idiots - is the capacity for joined-up thinking.
Consider these two facts:
- Humans are mammals.
- Humans build dams.
Those two facts are unrelated except insofar as both involve humans, but if I were to say “Can you list all the dam-building mammals for me,” you would first think of beavers, then - given a moment’s thought - could accurately answer that humans do as well.
Here’s how it goes with Gemini right now:
[Screenshot: Gemini’s response, which omits humans from its list of dam-building mammals.]
Now Gemini clearly has the information that humans are mammals somewhere in its model. It also clearly has the information that humans build dams somewhere in its model. But it has no means of joining those two tidbits together.
Some LLMs do better on this simple test of joined-up thinking, and worse on other similar tests. It’s kind of a crapshoot, and doesn’t instill confidence that LLMs are up for the task of complex thought.
And of course, the information-scraping bots that feed LLMs like Gemini and ChatGPT will find conversations like this one, and update their models accordingly. In a few months, Gemini will probably include humans in its list. But that’s not a sign of being able to engage in novel joined-up thinking, it’s just an increase in the size and complexity of the dataset.
- Comment on Java at 30: How a language designed for a failed gadget became a global powerhouse 1 month ago:
I would argue that without consistent and enforced type hinting, dynamically typed languages get very little benefit from their runtime type-checking. And with consistent, enforced type hinting, they might as well be considered actual statically typed languages.
Don’t get me wrong, that’s a good thing. Properly configured Python development environments basically give you both, even if I’m not a fan of the syntax.
- Comment on If AI was going to advance exponentially I'd of expected it to take off by now. 1 month ago:
It’s absolutely taking off in some areas. But there’s also an unsustainable bubble because AI of the large language model variety is being hyped like crazy for absolutely everything when there are plenty of things it’s not only not ready for yet, but that it fundamentally cannot do.
You don’t have to dig very deeply to find reports of companies that tried to replace significant chunks of their workforces with AI, only to find out middle managers giving ChatGPT vague commands weren’t capable of replicating the work of someone who actually knows what they’re doing.
That’s been particularly common with technology companies that moved very quickly to replace developers, and then ended up hiring them back because developers can think about the entire project and how it fits together, while AI can’t - and never will as long as the AI everyone’s using is built around large language models.
Inevitably, being able to work with and use AI is going to be a job requirement in a lot of industries going forward. Software development is already changing to include a lot of work with Copilot. But any actual developer knows that you don’t just deploy whatever Copilot comes up with, because - let’s be blunt - it’s going to be very bad code. It won’t be DRY, it will be bloated, it will implement things in nonsensical ways, it will hallucinate… You use it as a starting point, and then sculpt it into shape.
It will make you faster, especially as you get good at the emerging software development technique of “programming” the AI assistant via carefully structured commands.
And there’s no doubt that this speed will result in some permanent job losses eventually. But AI is still leagues away from being able to perform the joined-up thinking that allows actual human developers to come up with those structured commands in the first place, as a lot of companies that tried to do away with humans have discovered.
Every few years, something comes along that non-developers declare will replace developers. AI is the closest yet, but until it can do joined-up thinking, it’s still just a pipe-dream for MBAs.
- Comment on Java at 30: How a language designed for a failed gadget became a global powerhouse 1 month ago:
Hasn’t been updated since 2018. Does it still work?
- Comment on Java at 30: How a language designed for a failed gadget became a global powerhouse 1 month ago:
Oh, I know you can, but it’s optional and the syntax is kind of weird. I prefer languages that are strongly typed from the ground up and enforce it.
- Comment on Java at 30: How a language designed for a failed gadget became a global powerhouse 1 month ago:
Python is easy, but it can also be infuriating. Every time I use it, I’m reminded how much I loathe the use of whitespace to define blocks, and I really miss the straightforward type annotations of strong, non-dynamically typed languages.
- Comment on SignalFire: startups and Big Tech firms cut hiring of recent graduates by 11% and 25% respectively in 2024 vs. 2023, as AI can handle routine, low-risk tasks 1 month ago:
There’s inevitably going to be some rebounding from this. It’s probably true that the large language models these companies are betting their businesses on can do some of the things entry-level grads do, but we’ve already seen several of them fail because their MBAs didn’t realize that just barfing out code is only one part of what developers do.
Source: Am developer, currently working with LLMs and related tech, none of which would be able to get anywhere without someone like me doing the work.