Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.
In other words: please help us, use our AI.
Submitted 1 day ago by throws_lemy@reddthat.com to technology@lemmy.world
AI can absolutely be useful. But it's been wildly oversold, and the actual beneficial use cases are not nearly as profitable as the marketing around it suggests.
I hope all parties responsible for this garbage, including Microsoft, will pay a huge price in the end. Fuck all these morons.
Stop shilling for these corporate assholes or you will own nothing and will be forced to be happy.
I work in AI, and the only obvious profit is the ability to fire workers - workers they then need to rehire after some months, at lower wages. It is indeed a powerful tool, but tools don't drive profits; they are a cost. Unless you run a disinformation botnet, scamming websites, or porn. It is too unpredictable to really automate software creation (“fuzzy” is the term; we somewhat mitigate it with a stochastic approach). The movie industry is probably also cutting costs with it, but I'm not sure.
AI is the way capital is trying to acquire skills while cutting out the skilled.
AI is the way capital is trying to acquire skills while cutting out the skilled.
They are banking on that. They have been talking about replacing humanity for decades. But what that means is that a few select humans (i.e. them) will survive and be waited on hand and foot by AI, which will also invent things for them.
They want that. We aren’t there yet… and probably never will be. But that is what they want.
“Cognitive amplifier?” Bullshit. It demonstrably makes people who use it stupider and more prone to believing falsehoods.
I’m watching people in my industry (software development) who’ve bought into this crap forget how to code in real-time while they’re producing the shittiest garbage I’ve laid eyes on as a developer. And students who are using it in school aren’t learning, because ChatGPT is doing all their work - badly - for them. The smart ones are avoiding it like the blight on humanity that it is.
As evidence: how the fuck is a company as big as Microsoft letting their CEO keep making such embarrassing public statements? How the fuck has he not been forced into more public-speaking training by the board?
This is like the 4th “gaffe” of his since the start of the year!
You don’t usually need “social permission” to do something good. Mentioning it is, at best, publicly stating that you think you know what’s best for society (and they don’t). I think the more direct interpretation is that you’re openly admitting you’re doing the kind of thing you should have asked permission for, but didn’t.
This is past the point of open desperation.
Love your name.
Wild guess here: the “social” permission is the one where most countries have allowed them to do whatever it takes, plus special contract deals.
Likely not permission from the public, socially speaking. At least, I doubt that.
Last time they were crying that nobody wanted it and had made the word toxic. It’s all kind of a strategy to convert as many people as you can. Like other users mentioned above about people in their org using GPT: I see this too in my org, from a variety of engineers and regular folks, and I facepalm every time, because you get responses that roughly make sense but contextually are horrendously poor and entirely misunderstood.
Desperation, probably, because they invested so much money in demand that doesn’t even exist yet.
I’m watching people in my industry (software development) who’ve bought into this crap forget how to code in real-time while they’re producing the shittiest garbage I’ve laid eyes on as a developer.
I just spent two days fixing multiple bugs introduced by some AI-made changes. The person who submitted them, a senior developer, had no idea what the code was doing; he just prompted some words into Claude and submitted it without checking whether it even worked. Then it was “reviewed” and blindly approved by another coworker who figured, in his words, “if the AI made it, then it should be alright”
“if the AI made it, then it should be alright”
Show him the error of his ways. People learn best by experience.
And they are all getting dependent on, and addicted to, something that is currently almost “free”, but the monetization of it all will soon come in force. Good luck having the money to keep paying for it, or the capacity to handle all the advertising it will soon start pushing out. I guess the main strategy is to manipulate people into getting experience with it - these 2 or 3 years basically being a free trial - and to ensure people will demand access to the tools from their employers, or will pay out of their own pockets. When barely anyone is able to get their employer to pay for things like IDEs… Oh well.
We watched this exact same tactic happen with Xbox Game Pass over the last 5 years. They introduced it and left in the capability to purchase the “upgrade” for $1/year. Now they are suddenly cranking it up to $30/month, and people are still paying because they feel like it’s a service they “have to have”.
Hell, Microsoft and Apple did the same thing decades ago. Microsoft offered computer discounts to high schools and colleges, so that the students would be used to (and demand) Microsoft when they went into the business world. Apple then undercut that by offering very discounted products to elementary and junior high schools, so that the students would want Apple products in higher education and the business world.
The tactic let them write off all the discounts on their taxes, but lock in customers and raise prices on business (and eventually consumer) goods.
And students who are using it in school aren’t learning, because ChatGPT is doing all their work - badly - for them.
This is the one that really concerns me. It feels like generations of students are just going to learn to push the slop button for anything and everything they have to do. Even if these bots were everything the techbros claim they are, this would still be devastating for society.
Well, one way or another it won’t take too many generations. Either we figure out it’s a bad idea, or sooner or later things will go off the rails enough that we won’t maintain the infrastructure to support everyone using this type of “AI”. Being kind of right 90% of the time is not good enough at a power plant.
I’ve been programming professionally for 25 years. Lately we’re all getting these messages from management that don’t give requirements but instead give us a heap of AI-generated code and say “just put this in.” We can see where this is going: management are convincing themselves that our jobs can be reduced to copy-pasting code generated by a machine, and the next step will be to eliminate programmers and just have these clueless managers. I think AI is robbing management of skills as well as developers. They can no longer express what they want (not that they were ever great at it): we now have to reverse-engineer the requirements from their crappy AI code.
but instead give us a heap of AI-generated code and say “just put this in.”
we now have to reverse-engineer the requirements from their crappy AI code.
It may be time for some malicious compliance.
Don’t reverse-engineer anything. Do as you’re told: “just put this in” and deploy it. Everything will break and management will explode, but now you’ve demonstrated that they can’t just replace you with AI.
Now explain what you’ve been doing (reverse-engineering to figure out their requirements), but say that you’re not going to do it anymore. They need to either give you proper requirements so that you can write properly working code, or give you AI slop that you’ll just “put in” without a second thought.
You’ll need your whole team on board for this to work, but what are they going to do, fire the whole team and replace them with AI? You’ll have already demonstrated that that’s not an option.
So in your case, not only is the LLM coding assistant not making you faster, it’s actively impeding your productivity and the productivity of your stakeholders. That sucks, and I’m sorry you’re having to put up with it.
I’m lucky that in my day job, we’re not (yet) forced to use LLMs, and the “AI coding platform” our upper management is trying to bring on as an option is turning out to be an embarrassing boondoggle that can’t even pass cybersecurity review. My hope is that the VP who signed off on it ends up shit-canned because it’s such a piece of garbage.
I’m watching people in my industry (software development) who’ve bought into this crap forget how to code in real-time while they’re producing the shittiest garbage I’ve laid eyes on as a developer.
Yes. Then I come on Lemmy and see a dedicated pack of heralds concurrently professing that they do the work of 10 devs while eating bonbons, and that everyone who isn’t using it is stupid. So annoying.
God, that’s so frustrating. I want to shake them and shout, “No, your code is 100% ass now, but you don’t know it because it passes tests that were written by the same LLM that wrote your code! And you have barely laid eyes on it, so you’re forgetting what good code even looks like!”
“Cognitive amplifier?” Bullshit. It demonstrably makes people who use it stupider and more prone to believing falsehoods.
Demonstrably proven, too.
EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use.
I decided not to finish my college program partially because of AI like ChatGPT. My last 2 semesters would have been during the pandemic, with an 8-month work term before them. Covid ended up cancelling the work term, and they would give me the credit anyway. The rest of the classes would all be online and mostly multiple-choice quizzes. There wasn’t a lot of AI-scanning tech for academic submissions yet either. I felt that if I continued, I’d be getting a worse product for the same price (online vs. in class/lab), wouldn’t get that valuable work experience, and I’d be at a disadvantage if I didn’t use AI in my work.
Luckily my program had a 2-year or 3-year option. The first 2 years of the 3-year are the same, so I just took the 2-year cert and got out.
Wym you would be at a disadvantage? College isn’t a competition. By not using AI in the learning process and submissions you might get a lower grade than others, but trust me, no one fucking checks your college grades. They check whether you know what you are doing.
In fact you wouldn’t get a lower grade, others would have an inflated grade which then won’t translate to skills and will have issues in the workforce.
Was AI really that big of a thing at the time of Covid?
I study mechatronics in Germany and I don’t avoid it. I have yet to meet a single person who is avoiding it. I have made tremendous progress learning with it, but that is mostly because my professors refuse to give out solutions for the seminars. Learning is probably the only real advantage that I have seen yet - if you don’t use it for cheating or shortcuts, which is of course a huge problem. But getting answers to problems, getting to ask specific follow-up questions, and most of all researching and getting to the right information faster (through external links from the AI) has made studying much more efficient and enjoyable for me.
I don’t like the impact AI is having on society, but personally it has really helped me so far (discounting the looming bubble crisis and the effect it is having on the memory market, for example).
Fuck you
And eco-terrorism in the sense of destroying the environment, as opposed to destroying the attempts at destruction, like the Unabomber.
do something useful
Skynet or China wins is the goal. It’s useful to US empire to do Palantir surveillance for the “Patriotic subservience to Israel first agenda”. Robocops instead of ICE officers provide a useful increase in bravery to apply fascism. We must race China in robots, without any manufacturing aptitude or power capacity, with only extortionist oligarch power-expansion options, under an oligarchist, corporatist, zionist-supremacist fascism that concentrates oligarchy and fascism further, so as to force China to keep up, and “everyone” (important) makes money playing the game where winning is which side gets destroyed more.
So, as long as we view US empire as useful, Skynet is very useful. We can pretend that some other apps will be useful (Nadella is saying “just buy a PC and learn Excel to be useful” as his main point), but all of big tech is courting the US government for big datacenter use, and political unanimity for war on China means there is no other “useful” application required.
The most important social permission for AI is the permission to fund Skynet, and the permission for a warmongering military budget and attitude. An Israel/oligarchist-first rulership means there is never any money for any purpose other than that supremacism. The destination of collapse is a consequence only for the little people: wealth “creation” (pillaging) on the journey, and escape from the consequences of collapse.
Congrats, an LLM would be unable to spew all that crap in a million years
AI isn’t at all reliable.
Worse, it has a uniform distribution of failures across the range of seriousness of consequences - i.e. it’s just as likely to make small mistakes with minuscule consequences as major mistakes with deadly consequences - which is worse than even the most junior of professionals.
(This is why, for example, an LLM can advise a person with suicidal ideas to kill themselves)
Then on top of this, it will simply not learn: if it makes a major, deadly mistake today and you try to correct it, it’s just as likely to make a major, deadly mistake tomorrow as if you hadn’t tried. Even if you have access to adjust the model itself, correcting one kind of mistake just moves the problem around; it’s akin to trying to stop the tide on a beach with a sand wall - the only way to succeed is to wall the whole beach, by which point it’s in practice not a beach anymore.
You can compensate for this by having human oversight of the AI, but at that point you’re back to paying humans for the work being done. So instead of just the cost of a human doing the work, you have the cost of the AI doing the work plus the cost of the human checking the AI’s work - and the human has to check the entirety of it just to be sure. Worse, unlike a human, the AI’s work will never improve, and it will never include the kinds of improvements that humans doing the same work discover over time to make later work or other parts of the work easier (i.e. the product of experience).
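Put as rough arithmetic (a sketch of the point above; the symbols are made up for illustration, nothing formal):

```latex
% C_H  = cost of a human doing the work
% C_AI = cost of the AI doing the work
% C_R  = cost of the human reviewing the AI's output
% If nothing the AI produces can be trusted unchecked, C_R approaches C_H:
\[
  C_{\text{total}} = C_{\text{AI}} + C_{\text{R}}, \qquad
  C_{\text{R}} \approx C_{\text{H}}
  \;\Rightarrow\;
  C_{\text{total}} \approx C_{\text{AI}} + C_{\text{H}} > C_{\text{H}}
\]
```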
This seriously limits the use of AI to things where the consequences of failure can never be very bad (and for businesses, “not very bad” includes things like “does not significantly damage client relations”, which is much broader than merely “not life-threatening”). So mostly entertainment, plus situations where the AI alerts humans to something potentially found in a massive dataset, where it’s alright if the AI fails to spot it (for example, face recognition in video streams for general surveillance, where the humans watching those streams are just as likely or more likely to miss it), and where, if the AI spots something that isn’t there, the subsequent human validation can dismiss it as a false positive.
So AI is a nice new technological tool in a big toolbox, not a technological and business revolution justifying the stock market valuations around it and investment money sunk into it.
I generally agree with you, but I think the broadest category of useful applications is missing: where it’s easy to check whether the output makes sense. Or more precisely, applications where it’s easier to select the good outputs of an AI than to create them yourself.
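A minimal sketch of that generate-and-select pattern (the generator and checker below are hypothetical stand-ins for an LLM call and whatever cheap validation you have, e.g. a test suite):

```python
import random  # stands in for a real LLM client in this sketch

def generate_candidate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; here it just guesses."""
    return f"{prompt}-{random.randint(0, 9)}"

def is_good(candidate: str) -> bool:
    """Cheap deterministic check; in practice: tests, schema validation, a compiler."""
    return candidate.endswith("7")

def best_of_n(prompt: str, n: int = 20) -> str | None:
    # The pattern only pays off when checking is much cheaper than creating:
    # generate many candidates and keep the first one that passes the check.
    for _ in range(n):
        candidate = generate_candidate(prompt)
        if is_good(candidate):
            return candidate
    return None  # no candidate passed; fall back to doing it yourself

print(best_of_n("report"))
```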
Yeah.
Whilst I didn’t explicitly list that category as such, if you think about it, my AI for video surveillance and AI for scientific research examples are both in it.
Several flaws here: depending on the task, you can train and retrain models, or instruct new ones. Previous errors will be greatly reduced, or disappear completely (if we talk about errors only). Hallucinations are mathematically certain for less specialized models, but that is another problem altogether.
Using AI is indeed saving money (and time). It excels at tedious tasks with well-defined constraints. This saves me so much time every day, e.g.: find X in dataset Y that does not match Z. This work was usually done by humans, with a higher error rate. If I take 3 minutes to classify 1 million rows, which would have taken me at least 3 days before, that is money saved.
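A minimal sketch of that kind of job, assuming the OpenAI Python client; the model name, constraint, prompt wording, batch size, and file name are illustrative placeholders, not the commenter’s actual setup:

```python
# Flag rows in a dataset that violate a fuzzy constraint, in batches.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONSTRAINT = "the 'country' field must be a valid ISO 3166 country name"  # placeholder
BATCH = 50  # rows per request; tune for your token limits

def flag_nonmatching(rows: list[str]) -> list[str]:
    """Ask the model which rows violate the constraint; return its raw verdicts."""
    numbered = "\n".join(f"{i}: {r}" for i, r in enumerate(rows))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": f"For each row, answer '<index>: OK' or '<index>: BAD' "
                       f"given this constraint: {CONSTRAINT}\n{numbered}",
        }],
    )
    return (resp.choices[0].message.content or "").splitlines()

with open("dataset.csv", newline="") as f:
    rows = [",".join(r) for r in csv.reader(f)]

for start in range(0, len(rows), BATCH):
    for verdict in flag_nonmatching(rows[start:start + BATCH]):
        if "BAD" in verdict:
            print(start, verdict)  # spot-check the flagged rows by hand
```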
That said, they are trying to push the reverse-centaur approach - a human overseeing the AI worker - which is flawed. But companies reason in stakeholder profiles and 3-month windows.
When I started as a junior, I was the guy classifying the 1M records. That is how I learned. Now we don’t have juniors anymore. But companies don’t seem to care about the next 5 years.
So…he has something USELESS and he wants everybody to FIND a use for it before HE goes broke?
I’ll get right on it.
It’s insane how he says “we” not as in “we at Microsoft” but as in “Me, I and myself, as the sole representative of the world economy, say: find use cases for my utterly destructive slop machine… or else!”
Tech CEOs have all gone mad with protagonist syndrome.
Well, he is the “money man”. He doesn’t DO any of the work himself, he “buys” workers.
He has NO skill, NO knowledge, NO training, NO license. Just money. All you need is money.
Nice paraphrasing!
I was expecting something much worse, but to me it feels like he’s saying “we, the people working on this stuff, need to find real use cases that actually justify the expense”, which is… pretty reasonable.
Not defending him or Microsoft at all here but it sounds like normal business shit, not a CEO begging users to like their product
I mean, it would be a lot more reasonable if the entire tech industry hadn’t gone absolutely 100% all-in on investing billions and billions of dollars into the technology before realizing that they didn’t have any use cases to justify that investment.
“Bend the productivity curve” is such a beautiful way to say that they are running out of ideas on how to sell that damn thing.
It basically went from:
… to “bend the productivity curve”. It’s no longer “radically increases productivity”; no, it’s a lot more subtle than that, to the point that the curve can actually bend down. What a shit show.
I recently had a hook to get some investment for a startup. Money is flowing in this sector. The investor told me: find me any idea that might sell, that might be useful.
I went to speak with 3 associations of entrepreneurs in 3 different countries. Like, guys, we have the money, give me some ideas, all services will be free for you. All these entrepreneurs had no idea where to fit AI, except for some support chat.
The only thing I use AI for right now is spouting nonsense at it for a joke. For example I would ask ‘why didn’t (insert well known figure here) buy me lunch?’ Or ‘I farted and they cleared out a 10 block radius and called in a chemical weapons cleanup crew, is this normal?’
Shit like that.
I like giving it impossible tasks, like spell OPERATION with only 4 letters, and arguing with it as it refuses to admit that I’m wrong and have requested something impossible, or when it tries to cut corners. “No, I don’t want an abbreviation, or a word that means the same thing, I want you to spell the full word OPERATION with only 4 letters. Why can’t you get this right?”
Just make Copilot its own program that can be uninstalled, remove it from everywhere else in the OS, and let it be. People who want it will use it; people who don’t won’t. Nobody would be pissed at Microsoft over AI if that is what they had done from the start.
No, it will be attached to every application, as well as the start menu, settings, notepad, paint, regedit, calculator and every other piece of windows you AI hating swine
we attached it to the clock in case you need it to get the time wrong.
Right, except that, unlike Explorer or IE after it, it siphons everything it can back to Redmond, so even if one does not use it, it is STILL a problem.
How can you lose social permission that you never had in the first place?
The peasants might light their torches
Datacenters are expensive and soft targets.
This guy knows how to translate billionaire dipshit speak.
“Torching” the gas turbines what are on AI companies datacenters would be highly effective. Especially since they are outside and only a fence protects them.
It is so dumb that they gas our environment for “AI”. It was evil in WW1 and WW2, and it still is today. See:
It is insane.
There’s a latency between asking for forgiveness and being demanded to stop.
It’s easier to beg for social forgiveness than it is to ask for social permission
Eeh, didn’t you pay attention in Economics 101? If you generate more supply than demand, that’s a you problem. The free market will take care of it.
College degrees are scraps of paper to them. They go to those places to find people and make connections.
I went to university to learn and earn a degree. I didn’t make connections. Hence why I never landed a job.
The products and services around “AI” are deficient and dangerous; that’s what the market says. There’s no demand for bullshit products. It is the tech bros’ ignorance and unwillingness to understand that is revealed here. They don’t listen to the market, a.k.a. the people.
I guess the sunk cost fallacy does its part as well.
Delusional. They created a solution to a problem that doesn’t exist, to usurp power from citizens and concentrate it in a minority.
This is the opposite of the information revolution. This is the information capture. It will be sold back to the people it was taken from while being distorted by special interests.
Paper books are the way.
Software that respects your freedom is the way
That is why they burn them in the fictional story Fahrenheit 451.
The oligarch class is again showing why we need to upset their cart.
“Microsoft thinks it has social permission to burn the planet for profit” is all I’m hearing.
Well, they at least have investor permission…which is the only people they care about anyway
Probably in the Hobbesian sense that they’re not actively revolting.
Social permission? I don’t remember us having a vote or anything on this bullshit.
Social permission = shareholder permission
He’s saying “we need an ROI on all the cash we are burning before they sell up and the board kick me out for being a delusional and incompetent buffoon”
Get in the sea Nadella
Perhaps he considers society not insisting their politicians kick them out to be societal permission.
“Social permission” is one word for it.
Most people don’t realize this is happening until it hits their electric bills. Microslop isn’t permitted to steal from us. They’re just literal thieves.
As far as I can tell there hasn’t been any tangible reward in terms of pay increase, promotion or external recruitment from using the cognitive amplifier.
You don’t have permission 🤷♀️
You should… ASK PEOPLE BEFORE ? 🤷♀️🤷♀️🤷♀️🤷♀️🤷♀️
…
The whole point of “AI” is to take humans OUT of the equation, so the rich don’t have to employ us and pay us. Why would we want to be a part of THAT?
AI data centers are also sucking up all the high-quality GDDR5 RAM on the market, making everything that relies on that RAM ridiculously expensive.
Ah. Is THAT why they’re trying to shove it into everything.
Oh no.
you never had it to begin with. Goddamn leeches.
The AI industry needs to encourage job seekers to pick up AI skills (undefined), in the same way people master Excel to make themselves more employable.
Has anyone in the last 15 years willingly learned Excel? It seems like one of those things you have to learn on the job as your boomer managers insist on using it.
Translation: Microslop is finally realizing that they vastly miscalculated the cost/benefit ratio of AI tech.
To be honest, I did try a couple of AIs. But all I got were solutions that would never work on the stated hardware: code full of errors that, even when fixed, never functions as requested. And on any non-technical question it always agrees and hardly (not at all, actually) challenges any input you give it. So yeah, I’m done with it and waiting for the bubble to burst.
Isn’t it alleged that China goes for specific use cases and not general intelligence?
Maybe that’s the way to go, and not the gamble that the US and Western companies are taking.