jbloggs777
@jbloggs777@discuss.tchncs.de
Just a regular Joe.
- Comment on I vibe coded a driver to monitor and control the temperatures and fans in my Aoostar WTR PRO. 2 weeks ago:
spits on ground, squints reeeeaaal slow
Now listen here sonny… we don’t take kindly to none of that AI business around these parts.
We is simple, God fearin’ folk, raised on sweat, dirt, and good honest labor, not all of that fancy machine learnin’ contraption nonsense.
Ain’t no place for thinkin’ machines where a man’s meant to use his own two hands. So I reckon I’ll mosey on over and downvote this here post myself, nice and proper.
- Comment on “ChatGPT said this” Is Lazy 2 weeks ago:
The funny thing is, you rarely notice those who actually use it effectively in formulating comms, or writing code, or solving real world problems. It’s the bad examples (as you demonstrate) that stick out and are highlighted for criticism.
Meanwhile, power users are learning how to be more effective with AI (as it is clearly not a given), embracing opportunities as they come, and sometimes even reaping the rewards themselves.
- Comment on “ChatGPT said this” Is Lazy 2 weeks ago:
Heh. I often use LLMs to strip out the unnecessary and help tighten my own points. I fully agree that most people are terrible at writing bug reports (or asking for meaningful help), and LLMs are often GIGO.
I think the rule applies that if you cannot do it yourself, then you can’t expect an LLM to do it better, simply because you cannot judge the result. In this case, you are more likely to waste other people’s time.
On the other hand, it is possible to have agents give useful feedback on bug reports, request tickets, etc. and guide people (and their personal AI) to provide all the needed info and even automatically resolve issues. So long as the agent isn’t gatekeeping and a human can be pulled in easily. And honestly, if someone really wants to speak to a person, that is OK and shouldn’t require jumping through hoops.
- Comment on “ChatGPT said this” Is Lazy 2 weeks ago:
It’s a problem if (1) they claimed it as their own or it didn’t add value, and (2) it wasted your time as a result.
Sometimes the experts just know how to search more effectively in their domain (which nowadays increasingly means using the right context/prompt with some AI, and was formerly known as Google-Fu before Google search turned to shit).
To be genuinely helpful and polite, they’ll do a little legwork to respond personally and accurately… others might just be dicks who don’t respect your time.
Try not to be that dick yourself, though. If you are asking someone for help, show your work and provide relevant info so they don’t waste their time.
- Comment on “ChatGPT said this” Is Lazy 2 weeks ago:
Sure… copy & paste is copy & paste.
However, LLMs can help to formulate a scattered braindump of thoughts and opinions into a coherent argument / position, fact check claims, and help to highlight faulty thinking.
I am happy if someone uses AI first to come up with a coherent message, bug report, or question.
I am annoyed if it’s ill-researched/understood nonsense, AI assisted or not.
Within my company, I am contributing to an AI-tailored knowledge base, so that people (and AI) can efficiently learn just-in-time.
- Comment on Firefox's beta feature "Smart Window" shared browsing and search history to AI models without prompting 2 weeks ago:
If your company has an enterprise/privacy agreement with Adobe, it might be considered addressed, similar to the millions of companies using Microsoft 365 and Sharepoint.
If, OTOH, it’s a “free” feature of Adobe, it could be eating your company’s data without constraints.
If the latter, let us know your company’s name so that we can avoid it.
- Comment on Coding After Coders: The End of Computer Programming as We Know It 2 weeks ago:
Yeah, if you want it to write code that will act on important data or outside a sandbox, then a code review is still advised, even if only a sniff-test.
- Comment on AI social platforms like Moltbook are potential accelerators of existential risk that should be regulated as critical infrastructure 2 weeks ago:
No moltbook until you are at least 16 seconds old, young bot.
- Comment on Coding After Coders: The End of Computer Programming as We Know It 2 weeks ago:
If it’s run in a good sandbox, it’ll be safer than most of the code you run.
Then you add in controlled interfaces/gateways to give it “just enough” power to do something interesting… and you audit the hell out of those.
Risk is something that has to be managed, because it usually can’t be eliminated.
- Comment on After outages, Amazon to make senior engineers sign off on AI-assisted changes 2 weeks ago:
It’ll be temporary, a gut reaction to add more experienced engineers in the loop. These folks will try to codify and then push better checks/guardrails into CI/CD and tooling to save themselves time. Given how new this all is, it’s almost the blind leading the blind though.
Amazon might also have some poor system boundaries, leading to non-critical systems/code impacting critical systems. Or they just let junior devs with their AI tools run wild on critical components without adequate guardrails… also likely. :-P
- Comment on Popular self-hosting services worth running 3 weeks ago:
You sir, need an AI agent to maintain your self-hosting addiction and free you from the shackles of homelab responsibility. Automate the automations that maintain the automations. That’s the real endgame. /s
- Comment on Claude Code deletes developers' production setup, including its database and snapshots — 2.5 years of records were nuked in an instant 3 weeks ago:
It would be interesting to see the logs of your sessions, and compare them to the session logs of happy/productive-AI-coders.
I suspect that some people just think and express themselves in ways that don’t vibe with LLMs. eg. Men are from Mars, AI coding agents are from Venus.
- Comment on AI chatbots provide less-accurate information to vulnerable users: Research finds leading AI models perform worse for users with lower English proficiency, less formal education, and non-US origins. 5 weeks ago:
Indeed. Additional context will influence the response, and not always in predictable ways… which can be both interesting and frustrating.
The important thing is for users to have sufficient control, so they can counter (or explore) such weirdness themselves.
Education is key, and there’s no shortage of articles and guides for new users.
- Comment on AI chatbots provide less-accurate information to vulnerable users: Research finds leading AI models perform worse for users with lower English proficiency, less formal education, and non-US origins. 5 weeks ago:
Bio and memory are optional in ChatGPT though. Not so in others?
The age guessing aspect will be interesting, as that is likely to be non-optional.
- Comment on AI chatbots provide less-accurate information to vulnerable users: Research finds leading AI models perform worse for users with lower English proficiency, less formal education, and non-US origins. 5 weeks ago:
The LLMs aren’t being assholes, though - they’re just spewing statistical likelihoods. While I do find the example disturbing (and I could imagine some deliberate bias in training), I suspect one could mimic it with different examples with a little effort - there are many ways to make an LLM look stupid. It might also be tripping some safety mechanism somehow. More work to be done, and it’s useful to highlight these cases.
I bet if the example bio and question were both in Russian, we’d see a different response.
But as a general rule: Avoid giving LLMs irrelevant context.
- Comment on AI chatbots provide less-accurate information to vulnerable users: Research finds leading AI models perform worse for users with lower English proficiency, less formal education, and non-US origins. 5 weeks ago:
I agree. What you get with chatbots is the ability to iterate on ideas & statements first without spreading undue confusion. If you can’t clearly explain an idea to a chatbot, you might not be ready to explain it to a person.
- Comment on Tesla Switches Full Self-Driving to Subscription Only 5 weeks ago:
They rolled this update out mid-journey, and I had to scramble to swap seats with the mannequin driver. Not cool, Elon.
Not. Cool.
- Comment on Consumer hardware is no longer a priority for manufacturers 1 month ago:
There is plenty of consumer hardware that is supported on Linux, or will be as soon as a kernel developer gets their hands on it, reverse engineers the protocol if necessary, and adds support. For things like keyboards, there are often proprietary extensions (eg. for built-in displays, macros, etc.). It pays to check for Linux support before buying hardware though. Sometimes it’s not the kernel drivers, but supporting software (eg. Steam input) that might not support it.
First-class vendor support for Linux is more common for niche/premium hardware designed in the West than for the cheap Chinese knockoffs that follow it. Long-term customer support is not their strong suit.
- Comment on Consumer hardware is no longer a priority for manufacturers 1 month ago:
Sure… but why would el cheapo hardware want/need proprietary drivers? Now, for premium hardware and software, they might still want vendor lock-in mechanisms… So on Linux, unless you absolutely have to, you should avoid hardware that needs proprietary drivers.
- Comment on Consumer hardware is no longer a priority for manufacturers 1 month ago:
That’s capitalism for you. But also Linux, where it’s typical to upstream hardware support and rely on existing ecosystems rather than release addon drivers or niche supporting apps.
China has made some strategic investments in Linux over the years though – often domestically targeted, like Red Flag Linux, drivers for Chinese hardware, etc.
- Comment on China reveals 200-strong AI drone swarm that can be controlled by a single soldier — ‘intelligent algorithm’ allows individual units to cooperate autonomously even after losing communication with oper 2 months ago:
One of the interesting use-cases for LLMs is to find potential inconsistencies (across many sources), brainstorm abuse vectors & potential legal challenges, and then rewrite natural (including legal) language in a less ambiguous way. If this process were guided and vetted by talented lawmakers, it could be quite a useful tool, and is probably already used that way in many quarters.
The current executive will almost certainly abuse it and come up with hilariously bad proposals, vetted only by a marketing team, which will be ridiculed for years to come. Popcorn time.
- Comment on Asking the right questions... 3 months ago:
salutes
- Comment on Asking the right questions... 3 months ago:
It’s accelerating trends that have already been well underway in the world, with the US leading the pack, and doubling down on its own demise (and apparently also working toward the active demise of European Democracy and Freedom) under trump and jd vance.
The analogy I always think of is: We’ve got shovels and we are in a big hole … which way are we going to dig? In my experience, most people keep digging down because it seems easier now, and eventually find themselves in a deeper hole.
- Comment on I Went All-In on AI. The MIT Study Is Right. 3 months ago:
That this is and will be abused is not in question. :-P
- Comment on I Went All-In on AI. The MIT Study Is Right. 3 months ago:
While this is a popular sentiment, it is not true, nor will it ever be true.
AI (LLMs & agents in the coding context, in this case) can serve as both a tool and a crutch. Those who learn to master the tools will gain benefit from them, without detracting from their own skill. Those who use them as a crutch will lose (or never gain) their own skills.
Some skills will in turn become irrelevant in day-to-day life (as is always the case with new tech), and we will adapt in turn.
- Comment on I Went All-In on AI. The MIT Study Is Right. 3 months ago:
Indeed… Throw-away code is currently where AI coding excels. And that is cool and useful - creating one-off scripts, self-contained modules, automating boilerplate, etc.
You can’t quite use it the same way for complex existing code bases though… Not yet, at least…
- Comment on Dead mosquito proboscis used for high-resolution 3D printing nozzle 3 months ago:
Interesting fact: You can use an elephant’s trunk as a low-resolution 3D printing nozzle.
- Comment on Microsoft says Copilot will 'finish your code before you finish your coffee' adding fuel to the Windows 11 AI controversy that's still raging 4 months ago:
Hah, yeah. Vibe coding and prompt engineering seem like a huge fad right now, although I don’t think it’s going to die out, just the hype.
The most successful vibe projects in the next few years are likely to be the least innovative technically, following well trodden paths (and generating lots of throwaway code).
I suppose we’ll see more and more curated collections of AI-friendly design documents and best-practice code samples to enable vibe coding for varied use-cases, and this will be the perceived value add for various tools in the short term. The spec driven development trend seems to have value, adding semantic layers for humans and AI alike.
- Comment on Microsoft says Copilot will 'finish your code before you finish your coffee' adding fuel to the Windows 11 AI controversy that's still raging 4 months ago:
Yeah - there’s definitely a GIGO factor. Throwing it at an undocumented codebase with poor and inconsistent function & variable names isn’t likely to yield great revelations. But it can probably still tell you why changing input X didn’t result in a change to output Y (with 50k lines of code in-between), saving you a bunch of debugging time.
- Comment on Microsoft says Copilot will 'finish your code before you finish your coffee' adding fuel to the Windows 11 AI controversy that's still raging 4 months ago:
Most code on the planet is boring legacy code, though. Novel and interesting is typically a small fraction of a codebase, and it will often be more in the design than the code itself. Anything that can help us make boring code more digestible is welcome. Plenty of other pitfalls along the way though.