Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.
in other words please help us, use our AI
Submitted 3 weeks ago by throws_lemy@reddthat.com to technology@lemmy.world
So…he has something USELESS and he wants everybody to FIND a use for it before HE goes broke?
I’ll get right on it.
Nice paraphrasing!
I was expecting something much worse, but to me it feels like he’s saying “we, the people working on this stuff, need to find real use cases that actually justify the expense”, which is…pretty reasonable
Not defending him or Microsoft at all here but it sounds like normal business shit, not a CEO begging users to like their product
I mean, it would be a lot more reasonable if the entire tech industry hadn’t gone absolutely 100% all-in on investing billions and billions of dollars into the technology before realizing that they didn’t have any use cases to justify that investment.
It’s insane how he says “we”, not as in “we at Microsoft” but as in “Me, myself and I, as the sole representative of the world economy, say: find use cases for my utterly destructive slop machine… or else!”
Tech CEOs have all gone mad by protagonist syndrome.
Well, he is the “money man”. He doesn’t DO any of the work himself, he “buys” workers.
He has NO skill, NO knowledge, NO training, NO license. Just money. All you need is money.
“Social permission” is one word for it.
Most people don’t realize this is happening until it hits their electric bills. Microslop isn’t permitted to steal from us. They’re just literal thieves.
[Microsoft are] just literal thieves.
Always have been.
(But now it’s worse because it’s the entire public, not just their competitors)
They aren’t Microsoft anymore, they’re full on Microslop.
you will enjoy your chatbot that confidently tells lies while your electricity bill goes up by 50% and the nearby datacentres try to make the next model not use em-dashes
As a long-time user of the em-dash I’m pissed off that my usual writing style now makes people think I used AI. I have to second-guess my own punctuation and paraphrase.
How can you lose social permission that you never had in the first place?
The peasants might light their torches
Datacenters are expensive and soft targets.
This guy knows how to translate billionaire dipshit speak.
“Torching” the gas turbines that power AI companies’ datacenters would be highly effective, especially since they sit outside with only a fence protecting them.
It is so dumb that they gas our environment for “AI”. It was evil to do it in WW1 and WW2 and it is still evil today.
It is insane.
There’s a latency between asking for forgiveness and being demanded to stop.
It’s easier to beg for social forgiveness than it is to ask for social permission
The whole point of “AI” is to take humans OUT of the equation, so the rich don’t have to employ us and pay us. Why would we want to be a part of THAT?
AI data centers are also sucking up all the high-quality RAM on the market, making everything that relies on that RAM ridiculously expensive.
Not to mention the water depletion and electricity costs that the people who live near AI data centers have to deal with, because tech companies can’t be expected to be responsible for their own usage.
I mean, do you really think it’s a better idea to let them build their own separate water and power systems?
They should be forced to upgrade the existing infrastructure so everyone benefits.
I’d love to take humans out of the equation of all work possible. The problem is how the fascist rulers will treat the now unemployed population.
Yep. Ideal future is robots do all the work for us while we enjoy life.
But realistic future is rich people enjoy life while normal people starve.
I have to agree, even if I have no issue with GenAI itself. No one needs as many datacenters as they are planning. Adoption will crash as soon as they try monetizing it for real. Even if they try using cloud gaming as a load in those centers, not one person I know would trade their local PC for something that’s dependent on a fast internet connection without data caps and introduces a permanent 100ms+ delay on all games.
I swapped my 3070 Ti 8GB for a 5070 16GB. If I sell off the 3070 Ti, the upgrade cost me 300€ (but I tend to keep it as a backup), and I can run my local GenAI and LLMs without issues now. I don’t need datacenters, I need CDNs so I can get the content I run locally. And TBH, if they really try to kill local compute in gaming: I have enough games here to last me a decade or more without getting bored, and I can play all of that while sitting in a mountain cabin.
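For what it’s worth, a rough back-of-the-envelope VRAM estimate shows why a 16GB card is plenty for local models (the formula and the overhead figure are illustrative assumptions, not benchmarks):

```python
def estimate_vram_gb(params_billion, bits_per_weight, overhead_gb=2.0):
    """Rough VRAM needed to load a quantized LLM.

    Weights take params * bits / 8 bytes; a flat allowance is added
    for KV-cache and runtime overhead (a crude assumption).
    """
    weight_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb + overhead_gb

# A 13B model quantized to 4 bits needs roughly 6.5 GB for weights,
# so even with overhead it fits comfortably on a 16 GB card.
print(round(estimate_vram_gb(13, 4), 1))
```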
Worth pointing out: they already tried this for gamers specifically with GeForce Now, Stadia, etc., and it’s not exactly a cash cow. Not sure why they think a whole PC is preferable.
I mean, this is literally an argument against using oxen to plough fields instead of doing it by hand.
The answer is always that society should reorient around not needing constant labour and wealth being redistributed.
The five stages of corporate grief:
Denial: “AI will be huge and change everything!”
Anger: “noooo stop calling it slop, it’s gonna be great!”
Bargaining: “please use AI, we spent so much money on it!”
Depression: companies losing money and dying (hopefully)
Acceptance: everyone gives up on it (hopefully)
Acceptance: It will be reduced to what it does well and priced high enough that it doesn’t compete with equivalent human output. Tons of useless hardware will flood the market, China will buy it back and make cheap video cards from the used memory.
Which seems like good progress. I feel like they were in denial not three weeks ago.
May the depression be long lasting and heartfelt in the United States of AI.
Correct, but needs clarification:
Depression referring to the whole economy as the bubble burst.
Acceptance is when the government accepts to bail them out because they’re too big and the gov is too dependent on them to let them die.
“Microsoft thinks it has social permission to burn the planet for profit” is all I’m hearing.
Well, they at least have investor permission…which is the only people they care about anyway
Probably in the Hobbes sense that they’re not actively revolting
"social permission"?
Society didn't even permit you and others to spread AI onto everyone to begin with.
I don’t think there’s a single data center anywhere that a significant number of locals are even ambivalent about, let alone support…
In pretty much every place, they’re getting massive tax breaks citizens pay for, and cheaper energy prices because citizens will pay the higher cost due to increased demand of the data center.
We need to seize all this shit from corps.
Stop fucking around. Once Trump is handled we need to nationalize a whole lot of the shit that’s been privatized over the last 50 years.
I’ve also read reports that the noise levels and pollution coming out of these things is staggering. Not to mention they appear to be built as quickly as possible with little regard for laws and regulations.
Data centers should be demolished by act of a real leader if we ever get one. They got this data by corrupt means perverting our laws and regulators. They deserve to lose their entire investments, that information is a threat to society and we should not allow tech lords to control it.
techbros don’t understand consent
As opposed to legal permission, which, hahahahaha
In English: “they’re talking about guillotines a lot”
Fuck you
Translation: Microslop is finally realizing that they vastly miscalculated the cost/benefit ratio of AI tech.
The oligarch class is again showing why we need to upset their cart.
Social permission? I don’t remember us having a vote or anything on this bullshit.
As far as I can tell there hasn’t been any tangible reward in terms of pay increase, promotion or external recruitment from using the cognitive amplifier.
The AI industry needs to encourage job seekers to pick up AI skills (undefined), in the same way people master Excel to make themselves more employable.
Has anyone in the last 15 years willingly learned excel? It seems like one of those things you have to learn on the job as your boomer managers insist on using it.
you never had it to begin with. Goddamn leeches.
I will try to have a balanced take here:
The positives:
The negatives
Overall I wish the AI bubble burst already
I hope all parties responsible for this garbage, including Microsoft will pay a huge price in the end. Fuck all these morons.
Stop shilling for these corporate assholes or you will own nothing and will be forced to be happy.
Just make Copilot its own program that can be uninstalled, remove it from everywhere else in the OS, and let it be. People who want it will use it; people who don’t want it won’t. Nobody would be pissed at Microsoft over AI if that is what they had done from the start.
You already don’t have social permission to do what you are doing, and that hasn’t stopped you. The world is bigger than the 10 people around your board’s table.
Takeaway: 1. MS is well aware AI is useless. 2. Nadella admits they invested billions in something without having the slightest clue what its use case would be (“something something rEpLaCe HuMaNs”). 3. Nadella is blissfully unaware of the “social” image MS already has in the eyes of the public. You don’t have our social permission to still exist as a company!
Fuck this loser. We have enough issues to deal with on a daily basis. We don’t need to subsidize your fear of having wasted ungodly amounts of money and becoming irrelevant.
That’s a YOU problem, fool.
Best use for AI is CEO replacement
Delusional, created a solution to a problem that doesn’t exist to usurp the power away from citizens and concentrate it in the minority.
This is the opposite of the information revolution. This is the information capture. It will be sold back to the people it was taken from while being distorted by special interests.
Textbook definition of a solution searching for a problem.
I work in AI, and the only obvious profit is the ability to fire workers, whom they then need to rehire after some months at lower wages. It is indeed a powerful tool, but tools don’t drive profits; they are a cost. Unless you run a disinformation botnet, scam websites, or porn. It is too unpredictable to really automate software creation (“fuzzy” is the term; we somewhat mitigate it with stochastic approaches). The movie industry is probably also cutting costs with it, but I’m not sure.
AI is the way capital is trying to acquire skills cutting off the skilled.
AI isn’t at all reliable.
Worse, it has a uniform distribution of failures across the seriousness of consequences - i.e. it’s just as likely to make small mistakes with minuscule consequences as major mistakes with deadly consequences - which is worse than even the most junior of professionals.
(This is why, for example, an LLM can advise a person with suicidal ideas to kill themselves)
Then on top of this, it will simply not learn: if it makes a major deadly mistake today and you try to correct it, it’s just as likely to make a major deadly mistake tomorrow as if you hadn’t tried to correct it. Even if you have access to adjust the model itself, correcting one kind of mistake just moves the problem around; it’s akin to trying to stop the tide on a beach with a sand wall - the only way to succeed is to have a sand wall across the whole beach, by which point it’s in practice not a beach anymore.
You can compensate for this by having human oversight of the AI, but at that point you’re back to paying humans for the work being done. Instead of just the cost of a human doing the work, you now have the cost of the AI doing the work plus the cost of the human checking the AI’s work, and the human has to check the entirety of it just to be sure. Worse, unlike a human’s, the AI’s work will never improve, and it will never include the kinds of improvements that humans doing the same work discover over time to make later work easier (i.e. the product of experience).
This seriously limits the use of AI to things where the consequences of failure can never be very bad (and for businesses, “not very bad” includes things like “does not significantly damage client relations”, which is much broader than merely “not life-threatening”). So that’s mostly entertainment, plus situations where the AI alerts humans to a potential finding within a massive dataset, where it’s alright if the AI misses something (for example, face recognition in video streams for general surveillance, where humans watching those streams are just as likely or more likely to miss it) and where, if the AI spots something that isn’t there, the subsequent human validation can dismiss it as a false positive.
So AI is a nice new technological tool in a big toolbox, not a technological and business revolution justifying the stock market valuations around it and investment money sunk into it.
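The oversight argument above can be put into toy numbers (all figures are hypothetical, chosen only to illustrate the shape of the trade-off):

```python
def total_cost(human_rate, ai_rate, review_fraction):
    """Cost of AI output plus human review, vs. a human doing the work.

    review_fraction: share of a full human effort needed to check the
    AI's output. Per the argument above, with unpredictable failures
    the entire output must be checked, so this stays close to 1.0.
    """
    return ai_rate + human_rate * review_fraction

human = 100.0  # hypothetical cost for a human to do the task
ai = 10.0      # hypothetical cost for the AI to do the task

# If the reviewer must effectively check everything end to end,
# the combined cost exceeds just paying the human in the first place.
print(total_cost(human, ai, review_fraction=1.0))
```

The AI only pays off if `review_fraction` can be pushed well below 1.0, which is exactly what an unpredictable failure distribution prevents.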
“Bend the productivity curve” is such a beautiful way to say that they are running out of ideas on how to sell that damn thing.
It basically went from:
… to “bend the productivity curve”. It’s no longer about how it will “radically increase productivity”; no, it’s a lot more subtle than that, to the point that the curve can actually bend down. What a shit show.
Avoid spending trillions on a product nobody wants to pay for.
I already had enough reasons not to bother using it, he didn’t need to give me another one!
kescusay@lemmy.world 3 weeks ago
“Cognitive amplifier?” Bullshit. It demonstrably makes people who use it stupider and more prone to believing falsehoods.
I’m watching people in my industry (software development) who’ve bought into this crap forget how to code in real-time while they’re producing the shittiest garbage I’ve laid eyes on as a developer. And students who are using it in school aren’t learning, because ChatGPT is doing all their work - badly - for them. The smart ones are avoiding it like the blight on humanity that it is.
wizardbeard@lemmy.dbzer0.com 3 weeks ago
As evidence: How the fuck is a company as big as Microsoft letting their CEO keep making such embarrassing public statements? How the fuck has he not been forced into more public speaking training by the board?
This is like the 4th “gaffe” of his since the start of the year!
You don’t usually need “social permission” to do something good. Mentioning that is at best, publicly stating that you think you know what’s best for society (and they don’t). I think the more direct interpretation is that you’re openly admitting you’re doing the type of thing that you should have asked permission for, but didn’t.
This is past the point of open desperation.
Kyouki@lemmy.world 3 weeks ago
Love your name.
Wild guess here: the “social” permission is the one where most countries have allowed them to do whatever it takes, plus special contract deals.
Likely not the public, social kind. At least, I doubt that.
Last time they were crying that nobody wanted it and that the word had turned bad. It’s all kind of a strategy to convince as many people as you can. Like other users mentioned above about people in their org using GPT: I see this too in my org, from a variety of engineers and regular folks, and I facepalm every time, because you get responses that roughly make sense but contextually are horrendously poor and misunderstood entirely.
Desperation, probably, because they invested so much money in demand that doesn’t even exist yet.
devfuuu@lemmy.world [bot] 3 weeks ago
And they are all getting dependent on and addicted to something that is currently almost “free”, but the monetization of it all will soon arrive in force. Good luck having the money to keep paying for it, or the capacity to handle all the advertisement it will soon start pushing out. I guess the main strategy is to manipulate people into getting experience with it, these 2 or 3 years basically being a free trial, and ensuring people will demand access to the tools and pay for them out of their own pockets. When barely anyone can even get their employer to pay for things like IDEs… Oh well.
ThunderWhiskers@lemmy.world 3 weeks ago
We watched this exact same tactic happen with Xbox gamepass over the last 5 years. They introduced it and left in the capability to purchase the “upgrade” for $1/year. Now they are suddenly cranking it up to $30/month and people are still paying it because they feel like it’s a service they “have to have”.
aramis87@fedia.io 3 weeks ago
Hell, Microsoft and Apple did the same thing decades ago. Microsoft offered computer discounts to high schools and colleges, so that the students would be used to (and demand) Microsoft when they went into the business world. Apple then undercut that by offering very discounted products to elementary and junior high schools, so that the students would want Apple products in higher education and the business world.
The tactic let them write off all the discounts on their taxes, but lock in customers and raise prices on business (and eventually consumer) goods.
hushable@lemmy.world 3 weeks ago
I just spent two days fixing multiple bugs introduced by some AI-made changes. The person who submitted them, a senior developer, had no idea what the code was doing; he just prompted some words into Claude and submitted it without checking if it even worked. Then it was “reviewed” and blindly approved by another coworker who figured, in his words, “if the AI made it, then it should be alright”.
MonkderVierte@lemmy.zip 3 weeks ago
Show him the errors of his ways. People learn best by experience.
ech@lemmy.ca 3 weeks ago
This is the one that really concerns me. It feels like generations of students are just going to learn to push the slop button for any and everything they have to do. Even if these bots were everything techbros claimed they are, this would still be devastating for society.
jmill@lemmy.zip 3 weeks ago
Well, one way or another it won’t take too many generations. Either we figure out it’s a bad idea, or sooner or later things will go off the rails enough that we won’t maintain the infrastructure to support everyone using this type of “AI”. Being kind of right 90% of the time is not good enough at a power plant.
floofloof@lemmy.ca 3 weeks ago
I’ve been programming professionally for 25 years. Lately we’re all getting these messages from management that don’t give requirements but instead give us a heap of AI-generated code and say “just put this in.” We can see where this is going: management are convincing themselves that our jobs can be reduced to copy-pasting code generated by a machine, and the next step will be to eliminate programmers and just have these clueless managers. I think AI is robbing management of skills as well as developers. They can no longer express what they want (not that they were ever great at it): we now have to reverse-engineer the requirements from their crappy AI code.
nulluser@lemmy.world 3 weeks ago
It may be time for some malicious compliance.
Don’t reverse engineer anything. Do as you’re told, “just put this in” and deploy it. Everything will break and management will explode, but now you’ve demonstrated that they can’t just replace you with AI.
Now explain what you’ve been doing (reverse engineering to figure out their requirements), but that you’re not going to do it anymore. They need to either give you proper requirements so that you can write properly working code, or give you AI slop that you’ll just “put in” without a second thought.
You’ll need your whole team on board for this to work, but what are they going to do, fire the whole team and replace them with AI? You’ll have already demonstrated that that’s not an option.
kescusay@lemmy.world 3 weeks ago
So in your case, not only is the LLM coding assistant not making you faster, it’s actively impeding your productivity and the productivity of your stakeholders. That sucks, and I’m sorry you’re having to put up with it.
I’m lucky that in my day job, we’re not (yet) forced to use LLMs, and the “AI coding platform” our upper management is trying to bring on as an option is turning out to be an embarrassing boondoggle that can’t even pass cybersecurity review. My hope is that the VP who signed off on it ends up shit-canned because it’s such a piece of garbage.
Feyd@programming.dev 3 weeks ago
Yes. Then I come on Lemmy and see a dedicated pack of heralds constantly professing that they do the work of 10 devs while eating bonbons, and that everyone who isn’t using it is stupid. So annoying.
kescusay@lemmy.world 3 weeks ago
God, that’s so frustrating. I want to shake them and shout, “No, your code is 100% ass now, but you don’t know it because it passes tests that were written by the same LLM that wrote your code! And you have barely laid eyes on it, so you’re forgetting what good code even looks like!”
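A toy (entirely hypothetical) example of that tautology: if the test is generated from the code’s actual behavior, it just enshrines the bug and passes anyway.

```python
def average(values):
    # Buggy: off-by-one in the divisor (should be len(values)).
    return sum(values) / (len(values) + 1)

def test_average():
    # A test "derived" by running the buggy code asserts the buggy
    # output, so it passes while the function is still wrong:
    # the real mean of [2, 4, 6] is 4.0, not 3.0.
    assert average([2, 4, 6]) == 3.0

test_average()
print("tests pass, code is still wrong")
```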
FireRetardant@lemmy.world 3 weeks ago
I decided not to finish my college program partially because of AI like ChatGPT. My last 2 semesters would have been during the pandemic, with an 8-month work term before them. Covid ended up cancelling the work term, though they would give me the credit anyway. The rest of the classes would all be online and mostly multiple-choice quizzes. There wasn’t a lot of AI-scanning tech for academic submissions yet either. I felt that if I continued, I’d be getting a worse product for the same price (online vs. in class/lab), wouldn’t get that valuable work experience, and I’d be at a disadvantage if I didn’t use AI in my work.
Luckily my program had a 2-year or 3-year option. The first 2 years of the 3-year are the same, so I just took the 2-year cert and got out.
fushuan@lemmy.blahaj.zone 3 weeks ago
Wym you would be at a disadvantage? College isn’t a competition. By not using AI in the learning process and submissions you might get a lower grade than others, but trust me, no one fucking checks your college grades. They check if you know what you’re doing.
In fact you wouldn’t get a lower grade; others would have an inflated grade, which won’t translate to skills, and they’ll have issues in the workforce.
jaybone@lemmy.zip 3 weeks ago
Was AI really that big of a thing at the time of Covid?
firebyte@lemmy.world 3 weeks ago
Demonstrably proven, too.
www.media.mit.edu/…/your-brain-on-chatgpt/
JeffreyOrange@lemmy.world 3 weeks ago
I study mechatronics in Germany and I don’t avoid it. I have yet to meet a single person who is avoiding it. I have made tremendous progress learning with it. But that is mostly because my professors refuse to give out solutions for the seminars. Learning is probably the only real advantage I have seen yet, if you don’t use it for cheating or shortcuts, which is of course a huge problem. But getting answers to problems, getting to ask specific follow-up questions, and most of all researching and getting to the right information faster (through external links from the AI) has made studying much more efficient and enjoyable for me.
I don’t like the impact AI is having on society, but personally it has really helped me so far (discounting the looming bubble crisis and the effect it is having on, e.g., the memory market).
AnAbsurdlyAgitatedAnaconda@lemmynsfw.com 2 weeks ago
Yeah, it is a tool, and it has to be used correctly. It also comes with a trade-off when you research some topics: you gain time, but slowly lose the ability to conduct the research yourself. If I don’t have time constraints I avoid AI, so I can maintain my skill at searching, categorizing, and piecing together information, which is a key skill in a fast-moving industry (SW dev).
Also, for learning I usually only use it for follow-up questions; without a base understanding it can hallucinate whatever and spoon-feed it to my brain. Nothing can compete with an AI designed to burp out the most plausible-sounding phrases in existence. Unfortunately its correctness is not on par.
I often help my younger sister, who wants to learn programming, and I noticed she uses an extensive amount of AI. She can solve issues with the help of an AI but cannot solve them alone. At least it’s not vibe coding; she uses it for sub-tasks. But I fear it hinders her learning.