MangoCats
@MangoCats@feddit.it
- Comment on What would stop you from switching to a flip phone (or dumbphone) in 2025? 10 hours ago:
This notion that healthy adults need mental herding is very pervasive
Need is a strong word, but it is very true that the environment you put people in will influence their behavior. Grocery stores filled with attractively packaged, highly processed foods will drive more highly processed food consumption than if you had to show proof-of-age ID and sign a disclosure before being allowed into the back room to buy those same foods in plain brown wrappers emblazoned with all the health warnings that apply to them.
Handheld screen tech delivers dopamine hits as powerful as most recreational drugs or experiences. People are definitely “herded” by how that tech is delivered: default settings that most of them never take the time to learn how to change, other settings that constantly and annoyingly reset themselves to undesired PAY ATTENTION TO ME configurations, etc.
So, yeah, mindfulness of how your devices are shaping your behavior is a “higher level of awareness” that we as a society should be collectively trying to attain.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 1 day ago:
If you’re talking about India / China working for US firms, it’s supply and demand again.
It’s clearly not. Otherwise, we wouldn’t have a software guy left standing inside the US.
India / China can do a lot of things. For my company, they’re very strong in terms of producing products for their domestic market. They’re not super helpful per-capita on the US market oriented tasks, but they’re cheap - so we try to use them where we can.
There are not a lot of good US software engineers standing around unemployed… A lot of what I have interviewed as “available” are not even as good as what we get from India, but we have a house full of good developers already.
That’s just a bad business.
While I might reflexively agree, you have to ask yourself: from what perspective? Their customers may not be the happiest with the quality of the product, but for some reason they keep buying it, and the business keeps expanding and making more profit as the years go by… In my book, that’s a better business than the upstanding shop I worked at for 12 years that eventually went bust because we put too much effort into making good stuff by hiring good people to make it, and not enough effort into selling the stuff so we could continue to operate.
- Comment on What would stop you from switching to a flip phone (or dumbphone) in 2025? 1 day ago:
Same. My “smartphone” usage is about 10% phone, 10% SMS service, 10% camera, 5% flashlight, 10% GPS + map tool, 15% e-mail, and 40% web browser… I carried a pretty capable flip phone from 2006-2013; the things I liked best about it were its longevity and its long battery life (up to a week on standby, 3-4 days even with normal usage). However, even upgraded with GPS capability, the small screen would have made for a poor map experience, and e-mail and web browsing were just out of its practical reach.
Stop browsing social media, maybe install Tor if you want that level of privacy - Smartphones can do that…
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 1 day ago:
You’ll need to explain why all the overseas contractors are getting paid so much less, in that case.
If you’re talking about India / China working for US firms, it’s supply and demand again. Indian and Chinese contractors provide a certain kind of value, while domestic US direct employees provide a different kind of value - as you say: ease of communication, time zone, etc. The Indians and Chinese have very high supply numbers, if they ask for more salary they’ll just be passed over for equivalent people who will do it for less. US software engineers with decades of experience are in shorter supply, and higher demand by many US firms, so…
Of course there’s also a huge amount of inertia in the system, which I believe is a very good thing for stability.
But then the boom busted and those salaries deflated down to the $50k range.
And that was a very uneven thing, but yes: starting salaries on the open market did deflate after .com busted. Luckily, I was in a niche where most engineers were retained after the boom and inertia kept our salaries high.
$200K for remedial code cleanup should be a transient phenomenon, when national median household income hovers around $50-60K. With good architecture and specification development, AI can do your remedial code cleanup now, but you need that architecture and specification skill…
I’ve watched businesses lose clients - I even watched a client go bankrupt - from bad coding decisions.
I interviewed with a shop in a university town that had a mean 6-month turnover rate for programmers, and they paid the fresh-out-of-school kids about 1/3 my previous salary. We were exploring the idea of me working for them for 1/2 my previous salary, basically until I found a better fit. Ultimately, they decided not to hire me, with the stated reason being not that my salary demands were too high, but that I’d just find something better and leave them. Well… my “find a new job in this town” period runs 3-6 months even when I have no job at all; how can you lose anything when you burn through new programmers every 6 months or less? I believe the real answer was that they were afraid I might break their culture - start retaining programmers and building up a sustained team like in the places I came from - and they were making plenty of money doing things the way they had been doing them for 10 years so far…
it’s a dangerous game I see a few other businesses executing without caution or comparable results.
From my perspective, I can do what needs doing without AI. Our whole team can, and nobody is downsizing us or demanding accelerated schedules. We are getting demands to keep the schedules the same while all kinds of new data privacy and cybersecurity documentation demands are being piled on top. We’re even getting teams in India who are allegedly helping us to fulfill those new demands, and I suppose when the paperwork in those areas is less than perfect we can “retrain” India instead of bringing the pain home here. Meanwhile, if AI can help to accelerate our normal work, there’s plenty of opportunity for exploratory development of new concepts that’s both more fun for the team and potentially profitable for the company. If AI turns out to be a bust, most engineers on this core team have been supporting similar products for 10-20 years… we handled it without AI before…
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 1 day ago:
If you can’t hold on to them once they have experience, that’s a you problem.
I work at a large multi-national corp with competitive salaries, benefits, excellent working conditions, advancement opportunities, etc. I still have watched promising junior engineers hit the door just when they were starting to be truly valuable contributors.
you can have a shitty AI that will never grow beyond a ‘new hire.’
So, my perspective on this is that over the past 12 months, AI has advanced more quickly than all the interns and new hires I have worked with over the past 3 decades. It may plateau here in a few months, but even if it does, it’s already better than half of the 2-year-experienced software engineers I have worked with, at least at writing code from natural language specs provided to it.
The future problem, though, is that without the experience of being a junior dev, where do you think senior devs come from?
And I absolutely agree, the junior dev pipeline needs to stay full, because writing code is less than half of the job. Knowing what code needs writing is a huge part of it, crafting implementable and testable requirements, learning the business and what is important to the business, that has always been more than half of my job when I had the title “Software Engineer”.
the world suffocated under the energy requirements of doing everything poorly.
While I sympathize, the energy argument is a pretty big red herring. What’s the energy cost of a human software engineer? They have a home that has to be built, maintained, powered, etc. Same for their transportation which is often a privately owned automobile, driving on roads that have to be built and maintained. They have to eat, they need air conditioning, medical care, dental care, clothes, they have children who need to spend 20 years in school, they take vacations on cruise ships or involving trans-oceanic jet travel… add up all that energy and divide it by their productive output writing code for their work… if AI starts helping them write that code even 2x faster, the energy consumed by AI is going to be trivial compared to the energy consumed by the software engineer per unit of code produced, even if producing code is only 20% of their total job.
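The comparison can be made concrete with some back-of-envelope arithmetic. Every figure below is an illustrative assumption, not a measurement - the 40 Wh per response is the high-end estimate cited for a large-model reply, and the per-capita energy figure is a rough US average:

```python
# Back-of-envelope: AI energy vs. the total energy footprint of the
# engineer using it. All numbers are illustrative assumptions.
wh_per_ai_response = 40      # high-end estimate for one large-model reply
responses_per_day = 200      # heavy daily use by one engineer
ai_kwh_per_day = wh_per_ai_response * responses_per_day / 1000  # 8 kWh

human_kwh_per_day = 230      # rough US per-capita total energy use (all sources)
ratio = ai_kwh_per_day / human_kwh_per_day

print(f"AI: {ai_kwh_per_day} kWh/day, ratio: {ratio:.1%}")  # a few percent
```

Even at heavy usage, the AI adds on the order of a few percent to the engineer's total energy footprint under these assumptions; if it meaningfully accelerates the work, the energy per unit of output goes down, not up.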
I would say the same goes for Doctors, Teachers, Politicians, etc. AI is not going to replace 100% of any job, but it may be dramatically accelerating 30% or more of many of them, and that increase in productivity / efficiency / accuracy is going to pay off in terms of fewer ProfessionX required to meet demands and/or ProfessionX simply serving the world better than they used to.
My sister-in-law was a medical transcriptionist - made good money, for a while. Then doctors replaced her with automatic transcription; essentially, the doctors quit outsourcing their typing work to humans and started trusting machines to do it for them. All in all, the doctors are actually doing more work now than they did when they had human transcriptionists they could trust, because now they have AI transcription that they need to check more closely for mistakes than they did their human transcriptionists’ work, but the cost differential is just too big to ignore. That’s a job that was “eliminated” by automation, at least 90% or more in the last 20 years. But it was really a “doctor accessory” job; we still have doctors, even though they are using AI assistants now…
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 2 days ago:
Agreed… however:
The theory is that the new hire gets better over time as they learn the ins and outs of your business and your workplace style.
The practice is that over half of them move on to “other opportunities” within a couple of years, even if you give them good salary, benefits and working conditions.
And they’re commanding an $80k/year salary because they need to live in a country that demands an $80k/year cost of living
Not in the US. In the US they’re commanding $80k/yr because of supply and demand; it has very little to do with cost of living. I suppose when you get supply high enough and demand low enough, you eventually hit a floor where cost of living comes into play, but in many high-supply / low-demand fields that doesn’t happen until $30k/yr or even lower… Case in point: starting salaries for engineers in the U.S. were around $30-40k/yr up until the .com boom, at which point salaries for software-engineering-capable college graduates ramped up to $70k/yr in less than a year, due to demand outstripping supply.
stuffing your codebase with janky nonsense
Our codebase had plenty of janky nonsense before AI came around. Just ask anyone: their code is great, but everyone else’s code is a bunch of janky nonsense. I actually have some hope that AI generated code may improve to a point where it becomes at least more intelligible to everyone than those other programmers’ janky nonsense. In the past few months I have actually seen Anthropic/Claude’s code output improve significantly toward this goal.
Long term, it’s a death sentence.
It definitely is; the pipeline should continue to be filled, and dismissing seasoned talent is a mistake. However, I suspect everyone in the pipeline would benefit from learning to work with the new tools - at least the “new tools” as they’ll exist in a year or so. The stuff I saw coming out of AI a year ago? Not really worthwhile at the time, but today it is showing promise - at least at the microservice level.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 2 days ago:
Yes, this is the cost of training, and it is high, but also necessary if you are going to maintain a high level of capability in house.
Management loves the idea of outsourcing, my experience of outsourcing is that the ultimate costs are far higher than in house training.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 days ago:
while firing the rest of their dev team
That’s the complete mistake right there. AI can help code, it can’t replace the organizational knowledge your team has developed.
Some shops may think they don’t have/need organizational knowledge, but they all do. That’s one big reason why new hires take so long to start being productive.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 days ago:
Same, and AI isn’t as frustrating to deal with when it can’t do what it was hired for and your manager needs you to now find something it can do because the contract is funded…
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 days ago:
I have taken to drafting a complete requirements document and including it with my requests - for the very reasons you state. It seems to help.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 days ago:
I am a firm believer in rubber ducky debugging, but AI is clearly better than the rubber duck. You don’t depend on either to do it for you, but as long as you have enough self-esteem to tell AI to stick it where the sun don’t shine when you know it’s wrong, it can help accelerate small tasks from a few hours down to a few minutes.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 days ago:
Trusting any new code blindly is foolish, even if you’re paying a senior dev $200K/yr for it, it should be reviewed and understood by other team members before accepting it. Same is true for an LLM, but of course most organizations never do real code reviews in either scenario…
20-ish years ago, I was a proponent of pair programming. It’s not for everyone, and it’s not for anyone 40 hours a week, but in appropriate circumstances, for a few hours at a session, it can be hugely beneficial. It’s like a real-time code review during development. Pair programming seems no more popular today than it was back then, maybe even less so, but… “vibe coding” with LLMs in chat mode? That can be a very similar experience, up to a point.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 days ago:
We looked at the code produced and determined that it’s of the quality of a new hire.
As someone who did new hire training for about five years, this is not what I’d call promising.
Agreed. However, the difference is enormous between a new hire - who requires a desk, a parking space, a laptop, a lunch break, salary, and benefits, is likely to “pursue other opportunities” after a few months or years, and might turn around and sue the company for who knows what - and an AI assistant with a $20/mo subscription fee.
Would I be happy with new-hire code out of an $80K/yr headcount, if I had a choice?
If I get that same code, faster, for 1% of the cost?
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 3 days ago:
Human coder here. First problem: define what is “writing code.” Well over 90% of software engineers I have worked with “write their own code” - but that’s typically less (often far less) than 50% of the value they provide to their organization. They also coordinate their interfaces with other software engineers, capture customer requirements in testable form, and above all else: negotiate system architecture with their colleagues to build large working systems.
So, AI has written 90% of the code I have produced in the past month. I tend to throw away more AI code than the code I used to write by hand, mostly because it’s a low-cost thing to do. I wish I had the luxury of time to throw away code like that in the past and start over. What AI hasn’t done is put together working systems of any value - it makes nice little microservices. If you architect your system as a bunch of cooperating microservices, AI can be a strong contributor on your team. If you expect AI to get any kind of “big picture” and implement it down to the source code level - your “big picture” had better be pretty small - nothing I have ever launched as a commercially viable product has been that small.
Writing code / being a software engineer isn’t like being a bricklayer. Yes, AI is laying 90% of our bricks today, but it’s not showing signs of being capable of designing the buildings, or even evaluating structural integrity of something taller than maybe 2 floors.
- Comment on 3 days ago:
We’ve used the Google AI speakers in the house for years, and they make all kinds of hilarious mistakes. They’re also pretty convenient and reliable for setting and executing alarms like “7AM weekdays” and home automation commands like “all lights off”. Otherwise, it’s hit and miss, and very frustrating when they push an update that breaks things that used to work.
- Comment on 3 days ago:
I think I’d at least use an OCR program to do the bulk of the typing for me…
- Comment on 3 days ago:
Though one thing I have to say: I’m very annoyed by its constant agreeing with what I say, and enabling me when I’m doing dumb shit. I wish it would challenge me more and tell me when I’m an idiot.
There’s a balance to be had there, too… I have been testing a few AI engines to compare their code generation capabilities. If you want an exercise in frustration, try to make an old-school keypress-driven application on a modern line-oriented terminal interface while still using the terminal for standard text output. I got pretty far with Claude before my daily time limits kicked in. Claude did all that “you’re so right” ego-stroking garbage, but it also got me near a satisfactory solution. Then I moved to Google AI, and it started out reading me the “you just can’t do that, it won’t work” doom and gloom it picked up from some downer Stack Overflow thread or similar material. Finally, I showed Google my code that was already doing what it was calling impossible, and it started helping me polish the remaining rough spots. But if you believed its first-line answers, you’d walk away thinking that something relatively simple was simply impossible.
Lately, I have taken to writing my instructions in a requirements document instead of relying so much on interactive mode. It’s not a perfect approach, but it seems to be much more stable for “larger” projects where you hit the chat length limits and have to start over with the existing code - what you’ve captured in requirements tends to stick around better than just using the existing code as a starting point for how things should be, then adding/modifying from there. Ideally, I’d like it if the engine could just take my requirements document and make the app from that, but Claude still seems to struggle when total LOC gets into the 2000-5000 range for a 200-ish-line requirements spec.
- Comment on 4 days ago:
Con-ned-di-cut
- Comment on 4 days ago:
Of course, when the question asks “contains the letter _”, you might think an intelligent algorithm would get off its tokens and do a little letter-by-letter analysis. Related: ChatGPT is really bad at chess, but there are plenty of algorithms that are superhuman at it.
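The letter-by-letter analysis that trips up a token-based model is a one-liner for ordinary code - a minimal sketch:

```python
# Letter checks are trivial character-by-character work - no tokens involved.
def contains_letter(word: str, letter: str) -> bool:
    return letter.lower() in word.lower()

def count_letter(word: str, letter: str) -> int:
    return sum(1 for ch in word.lower() if ch == letter.lower())

print(contains_letter("Connecticut", "n"))  # True
print(count_letter("strawberry", "r"))      # 3
```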
- Comment on 4 days ago:
Bubbles and crashes aren’t a bug in the financial markets, they’re a feature. There are whole legions of investors and analysts who depend on them.
- Comment on 4 days ago:
If you want to get irate about energy usage, shut off your HVAC and open the windows.
- Comment on 4 days ago:
AI writes code for me. It makes dumbass mistakes that compilers automatically catch. It takes three or four rounds to correct a lot of random problems that crop up. Above all else, it’s got limited capacity - projects beyond a couple thousand lines of code have to be carefully structured and spoonfed to it - a lot like working with junior developers. However: it’s significantly faster than Googling for the information needed to write the code like I have been doing for the last 20 years, it does produce good sample code (if you give it good prompts), and it’s way less frustrating and slow to work with than a room full of junior developers.
That’s not saying we fire the junior developers, just that their learning specializations will probably be very different from the ones I was learning 20 years ago, just as those were very different than the ones programmers used 40 and 60 years ago.
- Comment on "Very dramatic shift" - Linus Tech Tips opens up about the channel's declining viewership 1 week ago:
And with transparency greed loses some of its advantage, we should be eroding those advantages any way we can…
- Comment on "Very dramatic shift" - Linus Tech Tips opens up about the channel's declining viewership 1 week ago:
This is where personalization comes in, if everybody can tune the algorithm to their liking with sufficient individuality, then algorithm gamers have a much more diffuse target. Also, if you’re getting targeted by abusers you don’t want to see, you can already filter that to some degree but it should be made even easier to “turn down the volume” on abusive groups. Abusive being in the opinion of the abused.
- Comment on "Very dramatic shift" - Linus Tech Tips opens up about the channel's declining viewership 1 week ago:
What we, as users, deserve is transparency in the algorithm and significant input into how it works for us. Do you like big channels, small channels? Etc. The problem is when people opt out of sponsored content but also refuse to pay. Transparency in the cost of delivering the service and in the income from advertising would help there too - unless the service provider wants obscene profits.
- Comment on SpaceX says states should dump fiber plans, give all grant money to Starlink 3 weeks ago:
I believe Florida’s recent build-out of utility scale natural gas plants is driven, in part, by their ability to ramp up and down virtually instantly.
However, the linked story is about a residential neighborhood where lots of homeowners installed individual natural gas powered generators for their homes. Then, when the public grid failed in a hurricane, they all switched on their “whole home, natural gas powered” generators at once for the first time and the natural gas supply to the neighborhood was nowhere near up to the task of delivering all that fuel at that rate.
- Comment on SpaceX says states should dump fiber plans, give all grant money to Starlink 3 weeks ago:
Read this quick before the people selling generators get it buried: wtsp.com/…/67-144d70da-bb27-496c-8928-ab7e61a53b0…
The gas company finally figured out how to deflect their responsibility in the matter: they say that the generator owners “didn’t register” their generators, but… now that it has been a year, has the gas company done anything to improve service capacity?
Anyway: the tie-in with Starlink is, anything like this works great until everybody tries to use it all at once at high capacity. When all 53,000 residents of Grand Island Nebraska decide to stream different high definition videos all at once? A good fiber system can handle that, Starlink? I’m curious…
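The aggregate demand in that scenario is easy to estimate. The per-stream bitrate below is an assumed round figure for HD video, not a measured one:

```python
# Back-of-envelope: everyone in town streaming HD video at once.
# The 5 Mbps per-stream figure is an illustrative assumption.
residents = 53_000        # Grand Island, NE
stream_mbps = 5           # rough bitrate for one HD stream
demand_gbps = residents * stream_mbps / 1000

print(f"Aggregate demand: {demand_gbps:.0f} Gbps")  # 265 Gbps
```

A metro fiber network is provisioned for loads in that range; a shared satellite beam serving the same area is not, which is exactly the "everyone at once" failure mode the generator story illustrates.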
- Comment on SpaceX says states should dump fiber plans, give all grant money to Starlink 3 weeks ago:
Time Warner and Comcast need to have all that grant money clawed back. They contracted with the taxpayers to deliver a service and they didn’t even make a good faith effort to start.
- Comment on SpaceX says states should dump fiber plans, give all grant money to Starlink 3 weeks ago:
Seriously, this is firmly in “well, we know you want all the free money you can get, but: no. Now go do your thing on your own dime” territory.
Fiber in the ground is infrastructure like paved roads. Satellites? One counter-orbiting frag bomb can take out a satellite constellation in less than a day.
- Comment on ChatGPT 5 power consumption could be as much as eight times higher than GPT 4 — research institute estimates medium-sized GPT-5 response can consume up to 40 watt-hours of electricity 3 weeks ago:
Me, personally, we have trees and shade. So many subdivisions don’t, and they have dark colored roofs, and then homeowners do bone-headed things like adding “sun rooms” - lots of those in Houston.
We get upset when our electric bill passes $300 for the month, but our neighbors with the 3500 sq ft house? They never see it under $400.