Senate report says AI will take 97M US jobs in the next 10 years, but those numbers come from ChatGPT
Submitted 3 weeks ago by uszo165@futurology.today to technology@lemmy.world
https://www.theregister.com/2025/10/06/ai_job_losses_us_senate_report/
Comments
Buffalox@lemmy.world 3 weeks ago
I think the fad will die down a bit once companies figure out that AI makes very expensive mistakes the company has to compensate for, and that "it was the AI" is not a valid cop-out.
I foresee companies going bankrupt on that account.
shalafi@lemmy.world 3 weeks ago
When the bubble bursts, whoever is left standing is going to have to jack prices through the roof to put so much as a dent in their outlay so far. Can't see many companies hanging in there at that point.
BanMe@lemmy.world 3 weeks ago
Not if the IP is purchased by another company leaving the original saddled with the debt, or spun off so the parent company can rebuy it thusly, or the government bails them out, or buys it to be the State AI too, or a bunch of other scenarios in this dark new world ahead.
a4ng3l@lemmy.world 3 weeks ago
I put my money on the AI Act here in Europe and the willingness of local authorities to make a few examples. That would help bring some accountability here and there and stir the pot a bit. Eventually, as AI commoditises, it will be less in the spotlight. That will also help.
jabjoe@feddit.uk 3 weeks ago
Good podcast about this bubble bursting: craphound.com/…/the-real-economic-ai-apocalypse-i…
humanspiral@lemmy.ca 3 weeks ago
OpenAI is pets.com. It has fairly crappy models in a very strong competition for models. The only difference with pets.com is that the US government is behind it to make Skynet for Israel's control. Datacenters are meant to develop Skynet. The only pretense of economic strength in the US is the datacenter economy, and Skynet for Israel is an absolute mission for the US government.
Despite no possible business economics for the datacenter model, the sheer will behind Skynet for Israel ensures that there is no imminent bubble pop. Perplexity and Coreweave may get sacrificed though.
Still, GPUs and specialized AI GPUs are here to stay, even if sales forecasts can be too high. Open weight models are awesome. Smaller models can be quantized and then trained for domain specialization, with hardware for small enough models accessible to individuals and businesses. The fatal flaw of using datacenter providers is that their purpose is to provide Skynet for Israel, and steal any information that might help in the process, and then terminate/genocide anyone who would stand in their way.
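If it helps make the local-model part concrete, here's a minimal sketch of the inference side using llama-cpp-python. The model path and prompt are placeholders; any open-weight GGUF quantized checkpoint that fits your hardware works the same way, and fine-tuning it for a domain would be a separate step not shown here.

```python
# Minimal sketch: running a quantized open-weight model locally with
# llama-cpp-python. The model path is a placeholder; fine-tuning for a
# specific domain is a separate step and not shown.
from llama_cpp import Llama

llm = Llama(model_path="./models/my-domain-model-q4.gguf", n_ctx=2048)

response = llm(
    "Summarize this maintenance ticket: pump #3 vibration exceeds spec.",
    max_tokens=128,
)
print(response["choices"][0]["text"])
```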
simplejack@lemmy.world 3 weeks ago
Agreed, but I do think that some jobs are just going to be gone.
For example, low level CS agents. I worked for a company that replaced that first line of CS defense with a bot, and the end-of-call customer satisfaction scores went up.
I can think of a few other things in my company that had a similar outcome. If the role is gone, and the customers and employees are being served even better than when they had that support role, that role ain’t coming back.
Buffalox@lemmy.world 3 weeks ago
I'm pretty sure that even consumer services is an area where I saw a computer make an expensive mistake, promising the customer something very costly, and a court decided the company had to honor the agreement the AI made. But I can't find the story, because I'm flooded with product placement articles about how wonderful AI is at saving costs in CS.
But yes, CS is absolutely an area where AI is massively pushed.
architect@thelemmy.club 3 weeks ago
Oh 100%. The question will be are there more opportunities that come from it. Here’s my guess: if you can’t produce something interesting you will be fighting for scraps. Even that might not be good enough.
jaybone@lemmy.zip 3 weeks ago
LOLLLLLLLL that’s like a third of the US population. Probably half of the number currently employed. There’s no way in hell this useless garbage will take 1/3 to 1/2 of all jobs. Companies that do this will go out of business fast.
skisnow@lemmy.ca 3 weeks ago
You can tell how competent someone is at something by how good they think AI is at that thing.
pinball_wizard@lemmy.zip 3 weeks ago
You can tell how competent someone is at something by how good they think AI is at that thing.
This is so true.
I recently had a colleague - ignorant of this perspective - give a training presentation on using AI to update a kind of useless, bullshit-job document.
Dozens of peers attended their presentation. They spent 40 minutes demonstrating relatively mindless prompt inputting.
I keep remembering just how many people they shared their AI enthusiasm with.
I think they may honestly believe that AI has democratized the workplace, and that they will vibe code their way to successful startup CEO-ship in a year.
TankovayaDiviziya@lemmy.world 3 weeks ago
And these 1/3 are a perfect horde for fascist brainwashing, consolidating the power of the techno-fascists. The fascists will tell the jobless that immigrants took their jobs, not robots.
Zephorah@discuss.online 3 weeks ago
Thus demonstrating the crux of the issue.
I was just looking for the name of a historical figure associated with the Declaration of Independence but not involved in the writing of it: Elizabeth Powel. Once I knew the name, I went through the ai to see how fast they'd get it. Duck.ai confidently gave me 9 different names, including people who were born in 1776 or soon thereafter and could not have been historically involved in any of it. I even said not married to any of the writers and kept getting Abigail Adams and the journalist, Goddard. It was continually distracted by "prominent woman" and would give Elizabeth Cady Stanton instead. Twice.
Finally, I gave the ai a portrait. It took the ai three tries to get the name from the portrait, and the portrait is the most used one under the images tab.
It was very sad. I strongly encourage everyone to test the ai. Easy-to-grab wikis that would be at the top of the search anyway are making the ai look good.
merc@sh.itjust.works 3 weeks ago
If you understand how LLMs work, that’s not surprising.
LLMs generate a sequence of words that makes sense in that context. It’s trained on trillions(?) of words from books, Wikipedia, etc. In most of the training material, when someone asks “what’s the name of the person who did X?” there’s an answer, and that answer isn’t “I have no fucking clue”.
Now, if it were trained on a whole new corpus of data that had “I have no fucking clue” a lot more often, it would see that as a reasonable thing to print sometimes so you’d get that answer a lot more often. However, it doesn’t actually understand anything. It just generates sequences of believable words. So, it wouldn’t generate “I have no fucking clue” when it doesn’t know, it would just generate it occasionally when it seemed like it was an appropriate time. So, you’d ask “Who was the first president of the USA?” and it would sometimes say “I have no fucking clue” because that’s sometimes what the training data says a response might look like when someone asks a question of that form.
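A toy sketch of that, with made-up numbers (not any real model's output): the only thing happening is weighted sampling of a plausible-looking continuation; nothing in the loop checks whether the answer is true.

```python
import random

# Toy sketch of next-token sampling. The probabilities are made up for
# illustration; nothing here checks whether an answer is true, it only
# picks something that *looks* like a plausible continuation.
next_answer_probs = {
    "George Washington": 0.90,
    "John Adams": 0.06,
    "Thomas Jefferson": 0.03,
    "I have no fucking clue": 0.01,  # rare, because training data rarely says this
}

def sample_answer(probs: dict[str, float]) -> str:
    """Sample a continuation weighted by probability: plausibility, not truth."""
    answers = list(probs.keys())
    weights = list(probs.values())
    return random.choices(answers, weights=weights, k=1)[0]

for _ in range(5):
    print(sample_answer(next_answer_probs))
```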
Buffalox@lemmy.world 3 weeks ago
LOL Maybe AI will be the next big job creator. The AI solves a task super fast, but then a human has to sort out the mistakes, and that takes twice as long as it would have taken to just do it yourself.
DarkDarkHouse@lemmy.sdf.org 3 weeks ago
This is what's happening in computer programming. The booming subfield is apparently slop cleaners.
Grandwolf319@sh.itjust.works 3 weeks ago
If you have a job where you can be confidently wrong without any self-awareness after the fact, then yeah, I guess.
But I can’t think of many jobs like that except something that is mostly just politics.
Blackfeathr@lemmy.world 3 weeks ago
Don’t forget the vast majority of CEOs.
thisbenzingring@lemmy.sdf.org 3 weeks ago
IMO AI would probably do the job of CEO better than a human. It wouldn’t be as greedy and would be happy with any growth while being humble enough to make decisions that might be personally embarrassing
WanderingThoughts@europe.pub 3 weeks ago
Spam and astroturfing mostly.
Buffalox@lemmy.world 3 weeks ago
And over the next 50 years it will take 485 million jobs, and the unemployment rate will be 235%.
architect@thelemmy.club 3 weeks ago
And we’ll all be dead.
popekingjoe@lemmy.world 3 weeks ago
Here’s hoping!
thisbenzingring@lemmy.sdf.org 3 weeks ago
funny… i expected IT workers to be in that list but we’re not. AI couldn’t do my job but it could be my boss and that frightens me.
BanMe@lemmy.world 3 weeks ago
I drove Amazon Flex during Covid, having an AI as your boss is deeply and perpetually unsettling but ultimately doable! Just do what the push notification tells you to do. If you want to say something to your boss, use the feedback form on the corporate website. So simple.
explodicle@sh.itjust.works 3 weeks ago
Zink@programming.dev 3 weeks ago
Ooh, I have not read this and it sounds pretty good. Just got lost reading a couple pages of it. Thanks for the link!
thisbenzingring@lemmy.sdf.org 3 weeks ago
I’m thinking William Gibson probably gets it right with the Neuromancer story
sexy_peach@feddit.org 3 weeks ago
What do you do?
thisbenzingring@lemmy.sdf.org 3 weeks ago
what don’t I do… some days… I tell you. My job is Systems Administrator
MonkderVierte@lemmy.zip 3 weeks ago
Stop calling LLMs AI. It suggests intelligence, which drives the bubble and makes people believe the false facts they produce.
innermachine@lemmy.world 3 weeks ago
The fact that "AI" training off other LLM slop produces worse and worse results is proof there is no "intelligence" going on, just clever parroting.
spicehoarder@lemmy.zip 2 weeks ago
LLMs are Mechanical Turks
luciferofastora@feddit.org 2 weeks ago
LLMs are the embodiment of “close enough”. They’re suitable if you want something resembling a certain mode of speech, formal tone or whatever without having to write it yourself.
When using it to train other LLMs, you're basically training them to get "close enough" to "close enough", with each generation getting a little further from "actually good" until, at some point, it's just no longer close enough.
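As a toy illustration (made-up numbers, nothing like how real training pipelines measure quality): if each generation only reproduces some fraction of its teacher's quality, the drift compounds.

```python
# Toy sketch of compounding drift. The fidelity number is made up; the point
# is only that "close enough to close enough" stacks multiplicatively.
FIDELITY = 0.9  # assume each generation reproduces 90% of its teacher's quality

def quality_after(generations: int, base_quality: float = 1.0) -> float:
    """Quality remaining after training each generation on the previous one's output."""
    return base_quality * FIDELITY ** generations

for gen in range(6):
    print(f"generation {gen}: quality ~ {quality_after(gen):.2f}")
```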
0x0@lemmy.dbzer0.com 2 weeks ago
It’s not a lie if you can obtain a trademark… our LLM is AI™. Just like how Teslas are Fully Self Driving™.
phutatorius@lemmy.zip 3 weeks ago
Just look at who’s in charge of the Senate, and ask yourself if they are to be trusted to do anything but lie, steal and carry out witch hunts.
As for LLMs, unless driving contact-centre customer satisfaction scores even further through the floor counts as an achievement, so far all there has been is a vast volume of hype and wasted energy, with very little to show for it except some highly constrained point solutions that aren't significant enough to make an economic impact. Even then, the ROI is questionable.
ms_lane@lemmy.world 2 weeks ago
It hasn’t taken any jobs, but this will keep being repeated so it can be used as a bludgeon against pay rises and keeping up with inflation.
‘you’re lucky to have a job’
UnderpantsWeevil@lemmy.world 2 weeks ago
It hasn’t taken any jobs
Microsoft to cut up to 9,000 more jobs as it invests in AI
Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions
Tesla’s layoffs hit Autopilot team as AI develops
A lot of these bozos are drinking their own Kool-Aid. They're laying off internal teams in droves and pivoting to "Vibes Coding" as a presumably more efficient method of internal development.
ipkpjersi@lemmy.ml 2 weeks ago
I disagree. I have literally heard of people being laid off because managers think that AI can and will replace actual workers; I have seen it happen too.
quetzaldilla@lemmy.world 2 weeks ago
They are all firing and laying off labor in order to avoid paying wages, but that labor is not being done by AI -- it's simply falling on those who are still employed, or not getting done at all.
I resigned from an international public accounting firm due to having AI forced on very sensitive and delicate projects in order to lower costs. Every professional alarm bell went off, and I left because I could be held liable for their terrible managerial decisions.
They told me they were sad to see me go, but that AI is the future and they hoped I'd change my mind. This was all back in April.
Not only did the AI fail to do even a fraction of the work we were told it was going to do, it caused over $2MM in client damages, which the firm then used to justify firing the remaining members of the projects' team for failing to properly supervise the AI, even though every manager struggles to open a PDF.
AI is not the future because it is literally only capable of looking backwards.
AI is a performative regurgitation of information that real people put the time and energy into gathering, distilling, refining, and presenting to others to evaluate and contribute to.
Even worse, AI demonstrably makes its users dependent and intellectually lazy. If you think about it, the more prevalent AI usage becomes, the fewer capable people will be left to maintain it. And to all the fools crying out that AI will take care of itself, or that robots will, I say:
All LLMs are hallucinating and going psychotic, and that is not something that can be fixed due to the very nature of how LLMs work.
AI is not intelligent. And while it could be, it would take far too much energy and resources to make cost-effective machines with as many neural connections as are present in the brain of an average MAGA voter-- and that is already a super low bar for most of us to clear.
drspawndisaster@sh.itjust.works 2 weeks ago
Yeah, it just hasn’t taken any of those jobs, apparently.
tidderuuf@lemmy.world 3 weeks ago
Knowing the way our country is going I would expect in the end workers will have to pay an AI tax on their income and most workers will start working 50 hours a week.
Buffalox@lemmy.world 3 weeks ago
I like your optimism that it won't be worse than that. 😋
ZombieMantis@lemmy.world 2 weeks ago
This shit’s so embarrassing
Smoogs@lemmy.world 3 weeks ago
Yes, but god forbid those jobs be stolen by another country. Can't have that.
_stranger_@lemmy.world 3 weeks ago
finitebanjo@lemmy.world 3 weeks ago
My prediction is that AI will be responsible for 0 net jobs lost but simultaneously responsible for many companies going under.
Afaithfulnihilist@lemmy.dbzer0.com 2 weeks ago
People will lose their jobs to AI in the same way that lumberjacks lose their job to forest fires.
SabinStargem@lemmy.today 3 weeks ago
I don’t think the numbers themselves are that important, the key bit is that AI is an advancing technology over this century. If we don’t rework our society to account for an oncoming future, people will get run over.
If there is an overhaul of my nation's Constitution, I would like economics to be addressed. One such thing would be a mechanical ruleset that adjusts the amount of wealth and assets a company can hold according to employee headcount. If they downsize the number of working humans, their limit goes down. They can opt to join a lotto program that grants UBI to people whose occupation is displaced by AI, and each income that is lotto'ed by the company adds to their Capital Asset Limit.
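To make the shape of that rule concrete, here's a rough sketch; every constant and name in it is hypothetical, just illustrating how the cap would move with headcount and lotto sponsorships, not a worked-out policy.

```python
# Hypothetical sketch of the proposed Capital Asset Limit rule.
# All constants are made up for illustration; only the shape of the rule
# (cap scales with headcount plus sponsored UBI incomes) matters.

BASE_LIMIT = 1_000_000        # floor every company gets (illustrative)
PER_EMPLOYEE = 5_000_000      # cap added per human employee (illustrative)
PER_LOTTO_INCOME = 2_000_000  # cap added per UBI income the company sponsors (illustrative)

def capital_asset_limit(employees: int, sponsored_ubi_incomes: int = 0) -> int:
    """Maximum wealth/assets the company may hold under the proposed ruleset."""
    return BASE_LIMIT + employees * PER_EMPLOYEE + sponsored_ubi_incomes * PER_LOTTO_INCOME

# Example: downsizing from 100 to 60 workers lowers the cap unless the
# company sponsors UBI incomes through the lotto program to offset it.
print(capital_asset_limit(100))                           # before downsizing
print(capital_asset_limit(60))                            # after downsizing
print(capital_asset_limit(60, sponsored_ubi_incomes=40))  # partially offset via the lotto
```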
HeyThisIsntTheYMCA@lemmy.world 3 weeks ago
One such thing would be a mechanical ruleset that adjusts the amount of wealth and assets a company can hold, according to employee headcount.
Expert here. That's a bad idea. Example: a small law firm, 10 employees including owners/partners/I don't care how they're organized. They have 3 bank accounts: their payroll account, their operating fund (where all their non-payroll expenditures are made), and their client liability account. None of the money in that last account is actually theirs; they just hold it while waiting for clients to cash their settlement checks.
Proportionally, at least at the firm I’ve consulted with, their client liability account is several orders of magnitude larger than either of the other accounts. Technically the money isn’t theirs, they are just custodians, and the interest from that account is their bar association dues.
My point is, certain asset caps may look appropriate for one industry and simultaneously be absolutely disruptive to others.
survirtual@lemmy.world 3 weeks ago
What is it you’re an expert of, here? Game theory? Or do you mean you’re a lawyer?
If you’re a lawyer, you are not an expert on formulating a society. We’ve let lawyers run things for a long time and look at where it’s gotten us.
The system needs to promote positive, human centric outcomes. Maybe having clients with that much wealth isn’t fundamentally a positive outcome? Perhaps that idea needs to be reworked as a part of the oncoming changes?
In other words, anyone dealing with a certain threshold of wealth needs to hire human beings in order to raise their cap. I like this idea a lot actually. The bigger the clients, the more they have to pay if they want legal representation. For billionaires, legal representation would cost an absolute fortune and provide income to thousands of people.
Honestly I haven’t thought of this pattern but the more I think about it, the better it seems.
SabinStargem@lemmy.today 3 weeks ago
In that case, what would you believe to be an appropriate solution for your industry? I would like your viewpoint; it might refine my concept a bit further*.
*My approach assumes a scenario that can broadly be described as 'What if FDR failed to save capitalism?', or a total breakdown of the economic reality we know. That is the sort of thing that the Framers of America did when they made the Constitution. They formalized rules on preventing absolute political power, so I am looking for something similar regarding economic gaps.
phutatorius@lemmy.zip 3 weeks ago
As a more general principle, don’t build nitpicky implementation detail into a strategy document. That’s how you get brainfarts like the 3/5 compromise.
BruceAlrighty@lemmy.nz 3 weeks ago
“If there is a massive overhaul, I would like to use this once in a century event to enact minimal changes that will help to keep the capitalist system in place.”
normalexit@lemmy.world 3 weeks ago
If we can make it through to midterm elections I will worry then.
Tollana1234567@lemmy.today 3 weeks ago
Even with the midterms, Trump did so much damage that it's only going to end up being blamed on the Dems anyway. It's better to intervene later, mainly so the Republicans receive the blame for once.
melfie@lemy.lol 2 weeks ago
AI isn’t taking the jobs, dipshit rich assholes are cutting the jobs. Taking a job implies doing the job, and from that perspective, the remaining people who weren’t laid off are taking the jobs, not AI.
humanspiral@lemmy.ca 3 weeks ago
robot tax
Stop with the stupid gimmicks from Bernie. Higher personal, corporate, and investment taxes to fund UBI. Welcome robots/automation to free us from useless work instead of looking at cannibal solutions to "pick me" for the one job there is.
Robot taxes are wrongheaded, because automation is hard to define. Tax pipes and wires and full employment will mean getting all your energy and water from the river with buckets and chopping down all the trees. Even if we strained to define narrow robots/automation categories, it would encourage more foreign production and no local robot production economy. Why would those selling yachts to the robot owners not be taxed?
IronBird@lemmy.world 3 weeks ago
We don't even have universal healthcare or functional public transit; UBI is a pipe dream…
Tollana1234567@lemmy.today 3 weeks ago
Or fast rail, even in places that are funded better. Public transport is lacking in many areas; for example, it doesn't reach the areas outside of cities where most tech jobs/biotech campuses are, so you need a car.
SugarCatDestroyer@lemmy.world 2 weeks ago
In fact, UBI is complete nonsense, because you won't be able to save for anything since your account will be reset every time and you will again receive about 2000 in currency, which will all go to pay for basic needs, and in the end you will become like a pet.
humanspiral@lemmy.ca 2 weeks ago
UBI is an easy winning election platform. The most promising aspect of UBI is that it is power redistribution. Wealth doesn't get redistributed, and the rich get richer even with higher taxes.
A UBI election platform has to be an anti-zionazi corruption platform. Everyone who pledges loyalty to Israel with a policy influence position is automatically a traitor, and will wait for their treason trial in military prison, citizenship removed, and their zionazi donors wealth must be zeroed out (confiscated by state). Defense and oil industries must be nationalized for their warmongering and zionazi influence. Zionazi media must be nationalized. AIPAC lobbyists and donors are traitors, not just foreign agents. ADL is a hate group.
we dont even have universal healthcare or functional public transit
Because of extreme political corruption. Money in politics and media tells you to never change that. The US pays 5% of GDP more than Canada on healthcare, and a significant quality of life improvement for Americans would come from spending less overall, including not being subject to the stress and crime that create healthcare costs. Health lobbies, and all other oligarchies, allying with Zionazi donor wishes is an easier path for corruption than excluding Zionazi influence from their own party. Andrew Yang's book-tour presidential run (focused on UBI/freedom dividends) started with including universal healthcare, but like the DNC, he accepted fundraising and lost all principles. His attempt to form a centrist party/coalition is effectively a Zionazi-only political coalition.
We cannot have nice things because Israel supremacy and war has to be purpose of US government. Your misery makes you ignore the pure evil of US, because bandaids on your misery is all that gets politically debated. You can’t think of American or human sustainability if collapse is imminent. UBI is the complete extermination of the establishment corruption. UBI makes every program have a cash dividend alternative that makes it virtually impossible for corrupt filth to support wasteful programs.
On AI topic, “national security to beat China” = make Skynet to support Israel media/information control to diminish and oppress us all for oligarchy. The alternative to freedom dividends/UBI is genocide of the slave class that has resistance negatives, and no longer any useful slavery positives, to Israel/oligarchy.
sugar_in_your_tea@sh.itjust.works 3 weeks ago
We don't necessarily need higher taxes; we could probably put an income cap on SS benefits, remove the cap on SS taxes, and fund it with the excess.
humanspiral@lemmy.ca 3 weeks ago
An extremely weird thing the GOP did, explainable only as Trump diaper lickers, is make SS tax-free, even though they've been pressuring for SS diminishment reform over the last 8 years. A very simple alternative would have been for taxes on SS income to flow back into the SS fund to strengthen it.
Overall, increased taxes on investment income alone can pay for UBI.
tal@olio.cafe 3 weeks ago
I wouldn’t put it entirely outside the realm of possibility, but I think that that’s probably unlikely.
The entire US only has about 161 million people working at the moment. For a 97 million shift to happen, roughly 60% of all jobs, you'd have to manage to transition most human-done work in the US to machines, using one particular technology, in 10 years.
Is that technically possible? I mean, theoretically.
I’m pretty sure that to do something like that, you’d need AGI. Then you’d need to build systems that leveraged it. Then you’d need to get it deployed.
What we have today is most certainly not AGI. And I suspect that we're still some ways from developing AGI. So we aren't even at Step 1 of that three-part process, and I would not at all be surprised if AGI turns out to be a gradual development rather than a "Eureka" moment.
Kyle_The_G@lemmy.world 3 weeks ago
and then 115 million will be needed to unwind the half-assed implementation and inevitable damage.
enbiousenvy@lemmy.blahaj.zone 3 weeks ago
that's UE4 Manny lol
twinklefruit@lemmings.world 3 weeks ago
Good.
Having machines do the work for us is a good thing.
boonhet@sopuli.xyz 3 weeks ago
Yes, just kill the 96 million people because it’s not like the capitalists are ever going to share what they control and Americans are never going to vote for social safety nets. Not within the next 10 years anyway.
CosmoNova@lemmy.world 3 weeks ago
Why even post this here? This is politics BS that‘s used as a diversion from the Epstein files.
TronBronson@lemmy.world 3 weeks ago
The Epstein files are a distraction from the dismantling of our constitutional law. What laws are you going to try the pedos under? Which courts do you plan on using? You see where I'm going with this? We all know who's on the list. Who's gonna hold them accountable? No one, so it's a stupid distraction.
IronBird@lemmy.world 3 weeks ago
There is no constitution and never was; they said all men were created equal while watching slaves pick their food.
Jaysyn@lemmy.world 3 weeks ago
I.e., made up on the spot.
DamnianWayne@lemmy.world 3 weeks ago
Well, my AI says it will take 96 or 98 million jobs, depending on what you want it to say, and only for $5,000.
weirdbeardgame@lemmy.world 3 weeks ago
The Senate will decide its fate.
expatriado@lemmy.world 3 weeks ago
The Onion? Looks like ChatGPT already misplaced the adviser-to-Congress jobs.
latenightnoir@lemmy.blahaj.zone 3 weeks ago
This is how AI will take over… not by wars or competence, but by being better at bureaucratic forgeries…
Gullible@sh.itjust.works 3 weeks ago
AI politicians might be the move after next.
Corporate personhood (you are here) ->
Corporation self advocates ->
Corporations run for office
I don’t like this future. I’d like to go back.
zqwzzle@lemmy.ca 3 weeks ago
I hate to break it to you….
www.bbc.com/news/articles/cm2znzgwj3xo
Lucidlethargy@sh.itjust.works 3 weeks ago
It’s easy when the first line of every reply is “oh, you’re so goddamn smart. Holy shit, are you the smartest person in the world for asking that question?..”
youtu.be/TuEKb9Ktqhc
Lydia_K@lemmy.world 3 weeks ago
This.