I think these guys forget that AI is just a program written by, drumroll please, HUMANS. Sure, we could shitcan every programmer and replace them with “vibe coders” and skate by for a year or two, but when bugs crop up and backend issues pile up, AI is not gonna unfuck the mess they created, and it will require human intervention. If these pricks do away with the technical folk, we’ll get to that point and suffer a technological collapse, because everybody who knew how to code fled or changed careers so they could pay rent.
Comment on CEO of Palantir Says AI Means You’ll Have to Work With Your Hands Like a Peasant
OR3X@lemmy.world 2 days ago
These morons really think AI is going to allow them to replace the technical folks. The same technical folks they severely loathe, because they’re the ones with the skills to build the bullshit they dream up, and as such demand a higher salary. They’re so fucking greedy that they are just DYING to cut these people out in order to make more profits. They have such inflated egos and so little understanding of the actual technology that they really think they’re just going to be able to use AI to replace technical minds going forward. We’re on the precipice of a very funny “find out” moment for some of these morons.
innermachine@lemmy.world 2 days ago
wonderingwanderer@sopuli.xyz 2 days ago
The FOSS ecosystem will be fine, because it’s maintained by people who do it for the love of the craft.
Pika@sh.itjust.works 2 days ago
The scary part is how it already somewhat is.
My friend is currently job hunting because they added AI to their flow and it does everything past the initial issue report.
the flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn’t work -> AI lints the final product once working -> AI submits the patch as pull.
Their job has been downscaled from being the one to organize, assign and work on code to an over-glorified code auditor who looks at pull requests and says “yes this is good” or “no send this back in”
PrejudicedKettle@lemmy.world 2 days ago
I feel like so much LLM-generated code is bound to deteriorate code quality and blow up the context size to such an extent that the LLM is eventually gonna become paralyzed
Pika@sh.itjust.works 2 days ago
I do agree, LLM-generated code is inaccurate, which is why they have to have the “throw it back” stage and a human eye looking at it.
They told me their main concern is that they aren’t sure they are going to properly understand the code the AI is spitting out to be able to properly audit it (which is fair), then of course any issue with the code will fall on them since it’s their job to give final say of “yes this is good”
WanderingThoughts@europe.pub 2 days ago
At that point they’re just the responsibility circuit breaker, put there to take the blame if things go wrong.
ns1@feddit.uk 2 days ago
It would be interesting to know where your friend works and what kind of application it’s on, because your comment is the first time I’ve ever heard of this level of automation. Not saying it can’t be done, just skeptical of how well it would work in practice.
Pika@sh.itjust.works 1 day ago
That was my general thought process too, prior to them telling me how the system worked. I had seen Claude workflows that do something similar, but I had not seen it taken to that level before. It was an eye opener.
dreamkeeper@literature.cafe 2 days ago
There’s absolutely no way this can be effective for anything other than simple changes in each PR.
Pika@sh.itjust.works 1 day ago
I’ll have to ask them how effective it is now that it’s been deployed for a bit. I wouldn’t expect so either, based on how I’ve seen open-source projects use stuff like that, but they also haven’t been complaining about it screwing up at all.
dreamkeeper@literature.cafe 21 hours ago
I found out that some teams at my company are doing the same thing. They’re using it to fix simple issues like exceptions and security issues that don’t need many code changes. I’d be shocked if it were any different at your friend’s company. It’s just surprising to me that that’s all he was doing?
LLMs can be very effective but if I’m writing complex code with them, they always require multiple rounds of iteration. They just can’t retain enough context or maintain it accurately without making mistakes.
I think some clever context engineering can help with that, but at the end of the day it’s a known limitation of LLMs. They’re really good at doing text-based things faster than we can, but the human brain just has an absolutely enormous capacity for storing information.
Canconda@lemmy.ca 2 days ago
- The rich fully intend to replace workers with slaves one way or another.
- AI robots can be utter shit and they will still be leaps and bounds above the task-specific automation that has been replacing human workers for decades.
- As long as the rich maintain their monopoly, quality of service can drop indefinitely. Doesn’t matter if AI robots suck ass when no human-employed company can compete and every other option is just as ass.
partial_accumen@lemmy.world 2 days ago
AI robots can be utter shit and they will still be leaps and bounds more efficient than the task specific automation that has been replacing human workers for decades.
I disagree with this, and we already have live examples today that are good analogs. Youtube is being flooded with AI generated slop. AI generated scripts, read by AI generated voices, over top of AI generated images.
I never seek these out, and I actively avoid them when I can tell what they are before clicking on them. Within the first 2 seconds of AI-generated voice, I can tell it’s slop, and I stop watching and go looking for a human-made video instead.
As long as the rich maintain their monopolies quality of service can drop indefinitely. Doesn’t matter if AI robots suck ass when no human employed company can compete and every other option is just as ass.
It can’t. At some point the quality of the product drops to a level where it is no longer a product. Let’s say we’re in your theoretical dystopian future where the monopoly is on cookies: there is no other place to buy cookies except from the monopoly. You posit that quality can drop indefinitely because there are zero alternative sources for cookies. So let’s say the monopoly cookie brand decides to substitute some of the wheat flour with sawdust as a cost-saving measure, with the consequence being yet lower quality cookies. At a tiny fraction of sawdust you may notice it, but the sawdust cookie may still be better than no cookie. The monopoly keeps increasing the sawdust content until the cookie contains zero wheat flour and is entirely sawdust. I believe even you would concede you would no longer buy the cookies at that point. In fact, you would have stopped buying them earlier, when the sawdust content got so high that the cookie was inedible to you, even though it still contained some wheat flour.
This same thing will apply to Youtube. If the only thing left to watch on youtube is AI slop because no human creators exist, then there is no point in watching youtube anymore.
The point here, is that even with a monopoly on a product, as soon as the quality drops below a certain threshold (and this point is different for every consumer), the product stops being a product to them.
Canconda@lemmy.ca 2 days ago
And yet youtube is still the dominant video host.
You’re missing the point.
partial_accumen@lemmy.world 2 days ago
And yet youtube is still the dominant video host.
Youtube hasn’t descended to being unusable yet.
You’re missing the point entirely. If instead of luxuries you look through the lens of necessities, perhaps you’ll see it. Replace cookies with bread and try to tell me people will choose to starve first. Like, obviously not.
I think you’re missing the point. If we substitute bread into the example I gave and they’re putting sawdust in it, then no, people will not buy bread made with zero flour and all sawdust. People will stop buying bread in that situation because it provides no nutritional value; they would starve anyway.
Ask a Ford employee 30 years ago about robot automation. Like, this is not a new thing in the 2020s. The rich have a playbook for this.
Now you’re speaking against your original point. Robot automation has not lowered the quality of a Ford vehicle. If anything it has increased it. A robot can have assembly tolerances much tighter than a human. Where is the lowering of quality from a robot making the vehicle that your original thesis demands?
phutatorius@lemmy.zip 1 day ago
In that first 2 seconds of AI generated voice, I can tell this is slop and stop watching it seeking a human generated video instead.
Report that crap, every time. It’s a plague.
BarneyPiccolo@lemmy.today 2 days ago
They don’t even dream it up any more. They hire brains, sift through their ideas, and say “I like that. Do that.”
After that, they are experts in manipulating finances to make their companies rich, and themselves richer, by paying the people who actually do the work, make the money, and create the shareholder value as little as possible.
wonderingwanderer@sopuli.xyz 2 days ago
That’s called “appropriating the surplus value of labor”
BarneyPiccolo@lemmy.today 1 day ago
I like that, explains the situation succinctly.
wonderingwanderer@sopuli.xyz 1 day ago
I can’t take credit for it. I believe the man who coined the term was named Carl something.
Or maybe he spelled his name with a K… Karl, Marquis? Marcus? Marquette? Something like that…
5too@lemmy.world 2 days ago
At this point, I question whether they’re even experts in that kind of finance, or if they’re just connected to each other well enough, and have a few willing experts in hand, to maintain their position.
I honestly think the only thing most of them have going for them is that it’s their name on the accounts.
segabased@lemmy.zip 2 days ago
Not just the high-paid software folks: the data centers are also maintained by highly skilled and hard-working techs, and this technology is only possible with constant, pristine maintenance of the servers that train their models. They loathe these people just as much and can’t wait to get humans out of the process.
dukemirage@lemmy.world 2 days ago
This specific moron was talking about people with a humanities degree.
yeahiknow3@lemmy.dbzer0.com 2 days ago
Even less plausible. There was a paper published recently arguing that by design LLMs are quite literally incapable of creativity. These predictive statistical models represent averages. They always and only generate the most banal outputs. That’s what makes them useful.
dukemirage@lemmy.world 2 days ago
Well, every academic field needs creativity. But it’s nothing new that people from economic or tech bubbles have a disdain for humanities.
phutatorius@lemmy.zip 1 day ago
The degree of randomness in generative models is not necessarily fixed; it is at least potentially tunable. I’ve built special-purpose generative models that work that way (not LLMs, another application). More entropy in the model increases the likelihood of excursions from the mean and surprising outcomes, though at greater risk of overall error.
There’s a broader debate to be had about how much that has to do with creativity, but if you think divergence from the mean is part of it, that’s within LLM capabilities.
yeahiknow3@lemmy.dbzer0.com 1 day ago
That’s a good point. The problem is that LLMs are calibrated for prediction; their randomness is tweaked for efficacy. Forcing them to be more chaotic just makes them much less effective. This inherent tension is why they’re mathematically incapable of any sort of consistent creativity.
UnderpantsWeevil@lemmy.world 2 days ago
A healthy chunk of CEOs have a humanities degree. It’s a common undergrad path before moving on to B-school or J-school.