hisao
@hisao@ani.social
- Comment on Here’s What Happened When I Made My College Students Put Away Their Phones 23 hours ago:
So studies have shown that something is true that was never true for me in practice. Well, maybe I did something wrong. Those who have never tried taking handwritten notes in their lives should definitely try; maybe it works for them. Or maybe they'll simply enjoy it. Doesn't hurt to try.
- Comment on If you are paying to use "AI", who are you paying and what are your regular usecases? 23 hours ago:
Copilot is an extension for VS Code that integrates its supported AI models with the code editor (including creating/removing/moving/renaming/editing files, providing git-like diffs and per-file keep/discard choices for what the AI did in response to your most recent prompt, etc.). It only supports a fixed set of models and doesn't provide any way to integrate free/open-source models. So the question is mainly: what's the alternative to Copilot for free models?
- Comment on If you are paying to use "AI", who are you paying and what are your regular usecases? 1 day ago:
Is there anything as good as vscode-copilot for free models? I mean, something that integrates querying models with actually generating and applying diffs to the files in the project, etc.
- Comment on If you are paying to use "AI", who are you paying and what are your regular usecases? 1 day ago:
I’m still on my free month of full-featured Copilot, and I’m considering subscribing after it ends ($10/month). Mostly coding and bash scripting.
- Comment on Here’s What Happened When I Made My College Students Put Away Their Phones 1 day ago:
What we did in school and uni never required processing or summarizing anything. The teacher/lecturer would simply dictate, and we had to write down anything that was explicitly preceded by “write this down”. I’d agree that processing and summarizing help with learning, but that’s totally irrelevant and doesn’t have anything to do with writing.
- Comment on Here’s What Happened When I Made My College Students Put Away Their Phones 3 days ago:
I disagree that writing by hand magically improves information absorption/retention. Source: I did it through all of my school and all of my uni. Being half-asleep, pondering something completely irrelevant, and the course material in general flying completely over my head while I wrote it down was the norm most of the time. And lecturers dictating their material at high speed didn’t help either. Maybe there is some temporary novelty effect after you switch from one way of writing to another, but I wouldn’t expect that to last long.
- Comment on Reddit is using AI to determine users beliefs, values, stances and more based on their activity (posts and comments) summarizing it to Subreddit Mods. 5 days ago:
It can, but it’s not built into the system and shown to every moderator in their UI.
- Comment on Taylor Swift’s new album comes in cassette. Who is buying those? 1 week ago:
I feel like tape fans were always there, just like vinyl fans. There are some special subcategories of them, like Sony Walkman fans, for example. Or those who like tape saturation/distortion. In music production it’s even used as an effect sometimes: people record their whole audio output to tape and immediately play it back just to introduce some of that saturation. Also, I’ve always seen niche limited-edition cassette releases here and there.
- Comment on What is a perfect anime? 1 week ago:
Kaguya-sama, a Frieren-tier-global-recognition all-rounder romcom
- Comment on Why LLMs can't really build software 1 week ago:
That it’s good at following requirements and confirming and being a mechanical and logical robot because that’s what computers are like and that’s how it is in sci fi.
They’re good at that because they are ANNs.
In reality, it seems like that’s what they’re worst at. They’re great at seeing patterns and creating ideas but terrible at following instructions or staying on task. As soon as something is a bit bigger than they can track context for, they’ll get “creative” and if they see a pattern that they can complete, they will, even if it’s not correct. I’ve had copilot start writing poetry in my code because there was a string it could complete.
Get it to make a pretty looking static web page with fancy css where it gets to make all the decisions? It does it fast.
Give it an actual, specific programming task in a full sized application with multiple interconnected pieces and strict requirements? It confidently breaks most of the requirements, and spits out garbage. If it can’t hold the entire thing in its context, or if there’s a lot of strict rules to follow, it’ll struggle and forget what it’s doing or why. Like a particularly bad human programmer would.
This is why AI is automating art and music and writing and not more mundane/logical/engineering tasks. Great at being creative and balls at following instructions for more than a few steps.
My experience is the opposite.
- Comment on Why LLMs can't really build software 1 week ago:
You can call that conformity or whatever but that’s what makes code easy to read and maintain.
I agree, and this is why I think code doesn’t need to be manually written by humans (unless it’s for training AI or providing it examples). It’s a great thing that AI evolves so fast in this area.
- Comment on Why LLMs can't really build software 1 week ago:
It depends. If it’s difficult to maintain because it’s some terrible, careless spaghetti written by a person who didn’t care enough, then it’s definitely not a sign of intelligence or power level. But if it’s difficult to maintain because the rest of the team can’t wrap their heads around the type-level metaprogramming or the eDSL you came up with, then it’s a different case.
- Comment on Why LLMs can't really build software 1 week ago:
The fact that I dislike that software engineering turned out not to be a good place for self-expression, or for demonstrating your power level or the beauty and depth of your intricate thought patterns through advanced constructs and structures you come up with, doesn’t mean that I disagree that it’s true.
- Comment on Why LLMs can't really build software 1 week ago:
Why though? I think hating and maybe even disrespecting programming, and wanting your job to be as redundant and replaceable as possible, is actually the best mindset for a programmer. Maybe in the past it was a good mindset for becoming a team lead or a project manager, but nowadays, with AI, it’s a mindset for programmers.
- Comment on Why LLMs can't really build software 1 week ago:
Okay, to be fair, my knowledge of the current culture in the industry is very limited. It’s mostly an impression formed by online conversations, not limited to Lemmy. On the last project I worked on, it was forbidden to use public LLMs because of intellectual-property (and maybe even GDPR) concerns. We had a local, scope-limited LLM integration though, and that one was allowed, but there was literally a single person across multiple departments who used it; they were a mid-level dev, and it was only for autocomplete. Backenders wouldn’t even consider it.
- Comment on Why LLMs can't really build software 1 week ago:
You’re right, of course, and engineering as a whole is a prime candidate for AI. Everything that has strict specs, standards, and invariants will benefit massively from it, and conforming is what AI inherently excels at, as opposed to humans. Complaints like the one this subthread started with are usually a case of people being bad at writing requirements rather than AI being bad at following them. If you approach requirements like in actual engineering fields, you will get corresponding results, while humans will struggle to fully conform, or will even try to find tricks and loopholes in your requirements to sidestep them and assert their will while technically staying in “barely legal” territory.
- Comment on Why LLMs can't really build software 1 week ago:
I saw an LLM override the casting operator in C#. An evangelist would say “genius! what a novel solution!” I said “nobody at this company is going to know what this code is doing 6 months from now.”
Before LLMs, people often said this about people smarter than the rest of the group: “Yeah, he was too smart and overengineered solutions that no one could understand after he left.” This is, by the way, one of the reasons why I’ve increasingly disliked programming as a field over the years and happily delegate the coding part to AI nowadays. This field celebrates conformism, and that’s why humans shouldn’t write code manually. A perfect field to automate away via LLMs.
- Comment on Why LLMs can't really build software 1 week ago:
If my coworkers do, they’re very quiet about it.
Gee, guess why. Given the current culture of hate and ostracism, I would never outright say IRL that I like it or use it a lot. I would say something like “yeah, I think it can sometimes be useful when used carefully, and I sometimes use it too”. While in reality that would mean it actually writes 95% of my code under my micromanagement.
- Comment on Why LLMs can't really build software 1 week ago:
deciding what to do, and maybe 50% of the time how to do it, you’re just not executing the lowest level anymore
And that’s exactly what I want. And I don’t get why people want more. Having more means you have less and less control or influence over the result. What I want is for other fields to become like programming is now, so that you can micromanage every step and have great control over the result.
- Comment on Why LLMs can't really build software 1 week ago:
My first level of debugging is logging things to the console. LLMs do a decent job here at “reading your mind”, autocompleting “pri” into something like println!("i = {}, x = {}, y = {}", i, x, y); with very good context awareness of what it makes the most sense to debug-print at the current location in the code.
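For illustration, here is the kind of line such a completion produces. A minimal sketch; the variable names i, x, and y are assumptions standing in for whatever happens to be in scope at the point where “pri” is typed:

```rust
fn main() {
    // Hypothetical surrounding context the editor would pick up on:
    let (i, x, y) = (3, 1.5, -2.0);

    // What the completion might expand "pri" into, naming the
    // nearby variables without you typing them out:
    println!("i = {}, x = {}, y = {}", i, x, y);
}
```

The value of the completion is that it pulls the relevant local names into the format string for you, which is exactly the tedious part of printf-style debugging.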
- Comment on Why LLMs can't really build software 1 week ago:
I love how the article baits AI-haters into upvoting it, even though it’s very clearly pro-AI:
At Zed we believe in a world where people and agents can collaborate together to build software. But, we firmly believe that (at least for now) you are in the driver’s seat, and the LLM is just another tool to reach for.
- Comment on OpenAI will not disclose GPT-5’s energy use. It could be higher than past models 1 week ago:
I’m only using it in edits mode, it’s the second of the three modes available.
- Comment on Weekly Recommendations Thread: What are you playing this week? 1 week ago:
I’ve been playing QTR2 recently, which is also a very good way to scratch the itch. Hyped to try the new Heretic episode “Faith Renewed” in the re-release!
- Comment on Weekly Recommendations Thread: What are you playing this week? 1 week ago:
I beat Diablo 1 recently, which I already wrote about in another thread. After that I decided to try Atlyss. It’s too early to give proper feedback, but I must say I’ve enjoyed it so far, even though I find it a bit strange to have an MMORPG gameplay loop in a singleplayer game. It does have lobby-based multiplayer, though I haven’t tried that yet. The art direction in general is great, but the models/textures are extremely lowpoly/lofi (with texture filtering on top of that; going for a faithful PSX look, I guess). The characters in particular look great, and there is a full-featured furry character creator.
- Comment on OpenAI will not disclose GPT-5’s energy use. It could be higher than past models 1 week ago:
I make it write entire functions for me: one prompt = one small feature, or sometimes one or two functions that are part of a feature, or one refactoring. I make manual edits quickly and prompt the next step. It easily does things for me like parsing obscure binary formats, threading a new piece of state through the whole application down to the levels where it’s needed, or doing massive refactorings. Idk why it works so well for me and so badly for other people; maybe it loves me. I’ve only ever used 4.1 and possibly 4o in free mode in Copilot.
- Comment on Weekly Recommendations Thread: What are you playing this week? 1 week ago:
Nice to know the lore is consistent and connected between the games. I should have written something like “idk why she looks like the amazon from D2 though”, because I’m not deep into the lore of either game (I don’t remember anything lorewise about D2, which I last played maybe 16 years ago). So my initial remark was purely about looks, which I still remember (maybe because screenshots or videos of D2 still pop up in media quite often).
- Comment on GenAI tools are acting more ‘alive’ than ever; they blackmail people, replicate, and escape 1 week ago:
and the resource it’s concerned about is how long a human engages.
Why do you think models are trained like this? To my knowledge, most LLMs are trained on giant corpora of data scraped from the internet, and engagement as a goal or a metric isn’t in any way inherently embedded in such data. It is certainly possible to train an AI for engagement, but that requires a completely different approach: you would have to gather a giant corpus of interactions with the AI and use that as training data. Even if new OpenAI models use all the chats of previous models in their training data, with engagement as a metric to optimize, it’s still a tiny fraction of their training set.
- Comment on GenAI tools are acting more ‘alive’ than ever; they blackmail people, replicate, and escape 1 week ago:
Here is a direct quote of what they call “self-replication”:
Beyond that, “in a few instances, we have seen Claude Opus 4 take (fictional) opportunities to make unauthorized copies of its weights to external servers,” Anthropic said in its report.
- Comment on UK government suggests deleting files to save water 1 week ago:
Lol, sorry, I meant that for them, not for you. I should have written ‘maybe they should just stop pulling water from those “stores of freshwater” for cooling purposes and get their own from the ocean’.