Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.
in other words please help us, use our AI
Submitted 1 day ago by throws_lemy@reddthat.com to technology@lemmy.world
Ah. Is THAT why they’re trying to shove it into everything.
Takeaway:

1. MS is well aware AI is useless.
2. Nadella admits they invested billions in something without having the slightest clue what its use case would be (“something something rEpLaCe HuMaNs”).
3. Nadella is blissfully unaware of the “social” image MS already has in the eyes of the public.

You don’t have our social permission to still exist as a company!
I have a nagging feeling the general public doesn’t hate Microsoft as much as computer nerds do, so overall their image probably sits somewhere between muddled and not that bad.
I believe Windows 11 and “AI everywhere” are quickly changing that. Gamers started migrating, but I believe the stats showing a growing usage of Linux on desktop as well as resistance to the Win11 migration go beyond the gamers.
But I admit: a survey right now may not yet show it. It’s probably trending up slowly.
I will try to have a balanced take here:
The positives:
The negatives:

Overall, I wish the AI bubble would burst already.
menial tasks that are important such as unit test coverage
This is one of the cases where AI is worse. LLMs will generate the tests based on how the code works and not how it is supposed to work. Granted, lots of mediocre engineers also use the “freeze the results” method for meaningless test coverage, but at least human beings have the ability to reflect on what the hell they are doing at some point.
I think machine learning has vast potential in this area, specifically for things like running iterative tests in a laboratory or parsing very large data sets. But a fuckin LLM is not the solution. It makes a nice translation layer, so I don’t need to speak and understand bleep bloop and can tell it what I want in plain language. But after that, an LLM seems useless to me outside of fancy search uses. It should be the initial processing layer that figures out what type of actual AI (ML) to utilize to accomplish the task. I just want an automator that I can direct in plain language; why is that not what’s happening? I know that I don’t know enough to have an opinion, but I do anyway!
Granted lots of mediocre engineers also use the “freeze the results” method for meaningless test coverage,
I’d be interested in what you mean by this. Aren’t all unit tests just freezing the result? A method is an algorithm: for certain inputs you expect certain outputs. You unit test these inputs and matching outputs, and add coverage for edge cases because it’s cheap to do with unit tests. These “freeze the results”, or rather lock them in, so you know that piece of code always works as expected; it’s “frozen/locked in”.
LLMs will generate the tests based on how the code works and not how it is supposed to work.
You can tell it to generate based on how it’s supposed to work, you know.
You could have it write unit tests as black box tests, where you only give it access to the function signature. Though even then, it still needs to understand what the test results should be, which will vary from case to case.
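For illustration, here’s a minimal Python sketch of the difference (the `parse_duration` function is a made-up toy, not from any real codebase):

```python
import unittest

def parse_duration(text: str) -> int:
    """Toy example: parse strings like '1h30m' into seconds."""
    total, num = 0, ""
    for ch in text:
        if ch.isdigit():
            num += ch
        elif ch == "h" and num:
            total, num = total + int(num) * 3600, ""
        elif ch == "m" and num:
            total, num = total + int(num) * 60, ""
    return total

class TestParseDuration(unittest.TestCase):
    def test_frozen_output(self):
        # "Freeze the results" style: 5400 was pasted from a run of the
        # current code. If the code had a bug, the buggy value would have
        # been pasted here and the bug locked into the suite.
        self.assertEqual(parse_duration("1h30m"), 5400)

    def test_against_spec(self):
        # Spec style: expected values derived from the documented behavior
        # (1h = 3600s, 1m = 60s), never from running the code itself.
        self.assertEqual(parse_duration("1h30m"), 1 * 3600 + 30 * 60)
        self.assertEqual(parse_duration("45m"), 45 * 60)
        self.assertEqual(parse_duration(""), 0)

if __name__ == "__main__":
    unittest.main()
```

The two assertions look identical on the surface; the difference is where the expected value came from, which is exactly the thing an LLM reading only the implementation can’t know.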
They f’d up with electricity rates and hardware price hikes. They were getting away with it by not inconveniencing enough laymen.
Very few laymen have noticed or give a shit about RAM prices. My young friend across the street and I are likely the only people on the block who know what RAM does, let alone are able to build a PC.
Business purchasing is where we might see some backlash soon. I’ve bought all the IT goods, hardware and software, for my last two companies, and I’d be screaming.
Boss: What the hell? Weren’t we getting these laptops for $1,200 last year?!
So I’m the literal author of the Philosophy of Balance, and I don’t see any reason why LLMs are deserving of a balanced take.
This is how the Philosophy of Balance works: We should strive…
But here’s the thing: LLMs and the technocratic elite funding them are a net negative to humanity and the world at large. Therefore, to strive for a balanced approach towards AI puts you on the wrong side of the battle for humanity, and therefore human history.
Pick a side.
You are presupposing that your opinion about LLMs is absolutely correct and then of course you arrive at your predetermined conclusion.
What about the free LLM models available out of China and other places that democratize LLMs?
Therefore, to strive for a balanced approach towards AI puts you on the wrong side of the battle for humanity, and therefore human history.
Thanks for not being dramatic, lol.
You’ve written my exact take more closely than I could have.
Only thing I’d add is using it to screw around with personal photos. ChatGPT is cleaning up some ’80s pics of my wife that were atrocious. I have rudimentary Photoshop skills, but we’d never have these clean pics without AI. OTOH, I’d gladly drop that ability to reclaim all the negatives.
It is useful as a sort of better Google, like for things that are documented, but reading the documentation makes your head hurt, so you can ask it to dumb things down, get the core concept, and go from there.
I agree with this point so much. I’m probably a real thicko, and being able to use it to explain concepts in a different way or provide analogies has been so helpful for my learning.
I hate the impact from use of AI, and I hope that we will see greater efficiencies in the near future so there is less resource consumption.
Oh no.
Anyway…
Anyway
Here’s something useful: REMOVE AI from all your products, and undo the Windows 10/11 changes.
“A great commander secures his victory before entering into battle. A poor commander first rushes into battle and then searches for victory.”
~Sun Tzu, The Art of War
Textbook definition of a solution searching for a problem.
Blockchain, NFTs, degenerative AI. I think I see a pattern here.
It’s a declaration of moral bankruptcy.
Best use for AI is CEO replacement
The problem with this is the savings will go to shareholders, not workers.
Next step is to make sure only the workers are shareholders.
The savings will also lead to the corporation’s profits to decline in the mid term, and then the savings will actually go to private equity and hedge funds, not the shareholders.
One small step toward the actual solution: “have the employees elect a CEO every 4 years.”
I see…
Fuck you.
Fuck this loser. We have enough issues to deal with on a daily basis. We don’t need to subsidize your fear of having wasted ungodly amounts of money and becoming irrelevant.
That’s a YOU problem, fool.
We need an American Zelenskyy who would save us from the oligarchs.
Like a president but good?
leader of the free world
Can’t keep what you never had, you corrupt piece of shit.
Dude, you never had “social permission” to do this in the first place; none of us asked for this shit. You’re literally destroying the planet and our future for your personal gain. You useless waste of space.
So they pushed to make AI, but never had a good use case for it that was world-changing, so now they want help to monetize it.
Feels like most trending tech these days. I’m so tired.
Avoid spending trillions on a product nobody wants to pay for.
I bought a second-hand laptop with Windows 11, and it had Copilot being pushed down my throat.
It’s now running Fedora just fine. And if I want I can spin up a local AI when I decide that I need it.
I already had enough reasons not to bother using it, he didn’t need to give me another one!
They don’t have that permission
I know something useful that can be done with AI in its current form. Toss it in the fucking garbage maybe.
On the one hand, I get it. I really do. It takes an absurd amount of resources for what it does.
On the other hand, I wonder if people said the same of early-generation computers. UNIVAC used tubes of mercury for RAM and consumed 125 kW of electricity to process a whopping 2k operations per second.
Probably not. Most people weren’t aware of it, nor did they have a care for power consumption, water consumption, etc. We were in peak-American Exceptionalism in the post-war era.
But suppose they had, and computers kinda just…died. Right there, in the 1950s. Would we have gone to the moon? Would we have HDTV? iPhones? Social media? A treacherous imbecile in charge of the most powerful military the world has ever seen?
Probably not.
So…I do worry about the consumption, and the ecological and environmental impact. But, what if that is a necessary evil for the continued evolution of technology, and with it, society? And, if it is, do we want that?
LLMs are dead end tech which is only useful for people who want to do unethical shit. They’re good at lying, making up nonsense, sounding like humans, facilitating scams, and misleading people. No matter how much time and energy is spent developing them, that’s all they’ll ever be good at. They can get better at doing those things, but they’ll never be good at anything actually useful because of the fact that there is no internal logic going on in them. When it tells you the moon is made of various kinds of rock, the exact same thing is happening as when it tells you the moon is made of cheese and bread. It has no way of distinguishing between these two statements. All of its ‘ideas’ are vapor, an illusion, smoke and mirrors. It doesn’t “understand” anything it’s saying, all it does is generate text that looks like something someone who does understand language would say. There is no logic in the background and there cannot be.
[Image: www.computerhistory.org/revolution/…/83]
early-generation computers met a demand that was being supplied by rooms and rooms of human calculators computing and checking each other’s work for scientists, engineers, businesses, and government agencies
[Image: Manhattan Project, Atomic Heritage Foundation]
they would not have died out, because they were a necessary part of the evolution of technology at their time. more importantly, they were more accurate than their human calculators: computers don’t forget to carry a number to the next digit or flip digits around, barring exceptionally rare cosmic-radiation events. and their technological progression fueled an ever greater need, until now, when tech has entered post-scarcity in calculating power.
generative AI, in contrast, was an offering looking for a purpose: spare gigaflops no longer needed, which tech people are trying to sell by building more and more hype around calculating power. sucks to be the one who invested in it, but that’s business. sometimes investments don’t work out. if microsoft can’t hype up a demand, then it is unnecessary technology.
Those old computers you speak of: They worked. There is no comparison to be made here.
They were built in order to give us an edge on the battlefield. More accurate artillery and the like. They did math which humans could do, but which would take humans weeks or months, and the answers were required within timeframes more like 12 hours, because war.
They were so useful, so valuable, that they were worth the treasure spent. They conferred a kind of superintelligence on their users. Those with brains to understand could see this, and so yes, hobbyists found their way to building their own machines once small CPUs became available, however janky. Anyone who had to do math, who HAD to do math, went into debt if they had to and learned to use these janky beasts, because the advantage was weeks or months of time they didn’t have to grind through on paper.
There is nothing about AI that resembles any of that.
I appreciate the social permission for so many folks to switch to Linux. KDE has come a long way.
Well, you already lost that, or rather never actually had it. You all pushed a broken and incomplete product; you need to find a use for it, not us…
AI is the only “product” that I’ve ever seen where the sales pitch is, “We made it and now you should want it, but you have to figure out why you want it.”
Even if I do want it, there are plenty of free models that I can use locally, on my desktop PC. They don’t phone home, either, so Nadella doesn’t see a cent from me directly or indirectly.
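As a sketch of what that looks like (assuming an Ollama server running on its default local port, with “llama3” standing in for whatever model you’ve actually pulled):

```python
import requests

# Query a locally hosted model through Ollama's HTTP API.
# Nothing here leaves localhost, so there's no phoning home.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # any model you've pulled locally
        "prompt": "Explain RAID 5 in two sentences.",
        "stream": False,     # one JSON reply instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text
```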
AI is the new 3dTV
It’s way worse than 3D TV.
Yes, 3D TV was pushed too soon. If they had waited for glasses-free technology (like the 3DS screen, for example), I think we would have 3D screens everywhere. Now the tech is dead because people had a really bad perception of 3D TV.
Hey, I like my 3D TV. Every once in a while I manage to find a pirated video that’s in 3D and it’s pretty neat. And unlike the current avalanche of generative/LLM bullshit, I can turn the 3D off, and when I do it works just fine as a perfectly ordinary TV, and in no way does it nag me incessantly to turn it back on.
Hey, don’t be mean to 3D TV. At least there’s an actual use case for it: watching 3D TV or movies, which aren’t actually that popular… Hmmm, I see your point, but I’ll also counter that 3D TVs are at least also regular TVs.
Do something useful
What do you mean, that using ChatGPT for a recipe for eggs, sunny side up without any seasoning or toppings and burning up the electricity of a moderate household for a week with my query isn’t useful?
Allrecipes has you covered.
No. No, it really doesn’t.
I want a vegan recipe that uses turbinado sugar. I get 3 articles and only one of them is a recipe. If I don’t like that recipe…too bad. That’s what they have.
It’s not the query that burns through electricity like crazy, it’s training the models.
You can run a query yourself at home with a desktop computer, as long as it has enough RAM and compute to support the model you’re using (think a few high-end GPUs).
Training a model requires a huge pile of computing power though, and the AI companies are constantly scraping the internet to ~~steal~~ find more training material.
Dunno if that’s true or not. Generally, much more compute is used in inference than training, since you only train once, then use that model for millions of queries or whatever. However, some of these AI companies may be training many models constantly to one-up each-other and pump their stock; dunno. The “thinking” model paradigm is also transferring a lot more compute to inference. IIRC OpenAI spent $300k of compute just for inference to complete a single benchmark a few months ago (and found that, like training, exponentially increasing amounts of compute are needed for small gains in performance).
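For a rough sense of scale on the “enough RAM and compute” side, here’s a back-of-the-envelope sketch (the parameter counts and quantization widths are illustrative, not measurements of any particular model):

```python
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory: parameter count x bytes per parameter."""
    bytes_total = params_billions * 1e9 * (bits_per_weight / 8)
    return bytes_total / 1e9  # decimal GB; ignores KV cache and overhead

for params in (7, 13, 70):
    for bits in (16, 4):
        print(f"{params:>3}B model @ {bits:>2}-bit ~ "
              f"{model_memory_gb(params, bits):6.1f} GB")

# A 7B model quantized to 4 bits fits in roughly 3.5 GB, while a 70B
# model at 16 bits wants about 140 GB -- hence "a few high-end GPUs".
```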
The number of adults I know who ask ChatGPT for recipes is non-zero.
Teenagers use it like it’s a search engine. They don’t understand the difference.
I got DeepSeek to run short roleplaying adventures that are surprisingly fun and engaging. It’s an amped-up choose-your-own-adventure, so for this application, the future is bright.
Not a single other llm can do this in any way approaching acceptable.
And it still lies and makes shit up, but in a fantasy world, I can let it pass unless it’s trying to rob me of experience lol.
When it can do long sessions and entire careers instead of detailed one offs it’ll have found its niche for me. Right now, it’s just a fun toy, prone to hallucinations.
I can’t believe people use these things for code…
Right now, it’s just a fun toy, prone to hallucinations.
That’s the thing though - with an LLM, it’s all “hallucinations”. They’re just usually close to reality, and are presented with an authoritative, friendly voice.
(Or, in your case, they’re usually close to the established game reality!)
This is the thing I hope people learn about LLMs, it’s all hallucinations.
When an LLM has excellent data from multiple sources to answer your question, it is likely to give a correct answer. But that answer is still a hallucination. It’s dreaming up a sequence of words that is likely to follow the previous words. It’s more likely to give an “incorrect” hallucination when the data is contradictory or vague. But the process is identical. It’s just trying to dream up a likely series of words.
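A toy sketch of that “dream up a likely series of words” loop (a word-level bigram model here; real LLMs use transformers over subword tokens, but the sampling idea is analogous):

```python
import random
from collections import defaultdict

# Tiny "training corpus" containing both true and false statements.
corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon orbits the earth .").split()

# Record which words follow which (the model's entire "knowledge").
follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)

def generate(start: str, max_words: int = 8) -> str:
    words = [start]
    for _ in range(max_words):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample a likely next word
    return " ".join(words)

# May say the moon is made of rock, or of cheese; the procedure that
# produces either sentence is exactly the same.
print(generate("the"))
```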
I can also see a lot of use in general for gaming! There might be a future where game assets are generated on the fly, dialogue and storylines have no artificial limits, and there are no invisible borders in game worlds. The technology is useful, but not in the way those fools want to force it.
Yes, images where not every pixel is important. NPCs going about their business. The traffic. The weather. Games will use it, I’m sure of it.
Fair, but compare that to the fun of an actual in-person TTRPG. It’s the main way I make new friends as an adult man.
It’s like Facebook’s squandering tens of billions of dollars on the Metaverse even though nobody asked for it or wants it. Ultimately they had to give up on it, and the same thing will happen here.
I hereby revoke my permission.
“Mommy and daddy gubberment pls help, the CONSUMERS hate my product.”
redlemace@lemmy.world 11 hours ago
To be honest, I did try a couple of AIs. But all I got were solutions that would never work on the stated hardware, and code full of errors that, when fixed, never functioned as requested. On any non-technical question it’s always agreeing and hardly (not at all, actually) challenging any input you give it. So yeah, I’m done with it and waiting for the bubble to burst.
utopiah@lemmy.world 10 hours ago
Sorry buddy but you are not “smart enough” to use that super powerful tool that supposedly can do everything extremely convenient for you! /s
Honytawk@feddit.nl 7 hours ago
While I agree with you, it sounds like you have only tried LLMs back in the day. They have become a lot better in recent times.
Especially when you want code, the differences are stark between an old LLM and a recent programming optimized LLM like Claude.