Workers should learn AI skills and companies should use it because it’s a “cognitive amplifier,” claims Satya Nadella.
in other words please help us, use our AI
Submitted 1 day ago by throws_lemy@reddthat.com to technology@lemmy.world
AI is the new 3dTV
It’s way worse than 3D TV.
Yes, 3D TV was pushed too soon. If they had waited for glasses-free technology (like the 3DS screen, for example), I think we would have 3D screens everywhere. Now the tech is dead because people came away with a really bad perception of 3D TV.
Hey, I like my 3D TV. Every once in a while I manage to find a pirated video that’s in 3D and it’s pretty neat. And unlike the current avalanche of generative/LLM bullshit, I can turn the 3D off, and when I do it works just fine as a perfectly ordinary TV, and in no way does it nag me incessantly to turn it back on.
Hey, don’t be mean to 3D TV. At least there’s an actual use case for it: watching 3D TV or movies, which aren’t actually that popular… Hmmm, I see your point, but I’d counter that 3D TVs at least also work as regular TVs.
Do something useful
What do you mean, using ChatGPT for a recipe for sunny-side-up eggs without any seasoning or toppings, and burning a moderate household’s week of electricity with my query, isn’t useful?
Allrecipes has you covered.
No. No, it really doesn’t.
I want a vegan recipe that uses turbinado sugar. I get 3 articles and only one of them is a recipe. If I don’t like that recipe…too bad. That’s what they have.
It’s not the query that burns through electricity like crazy, it’s training the models.
You can run a query yourself at home on a desktop computer, as long as it has enough RAM and compute to support the model you’re using (think a few high-end GPUs).
Training a model requires a huge pile of computing power though, and the AI companies are constantly scraping the internet to ~~steal~~ find more training material.
Dunno if that’s true or not. Generally, much more compute is used in inference than in training, since you only train once, then use that model for millions of queries or whatever. However, some of these AI companies may be training many models constantly to one-up each other and pump their stock; dunno. The “thinking” model paradigm is also shifting a lot more compute to inference. IIRC OpenAI spent $300k of compute just on inference to complete a single benchmark a few months ago (and found that, like training, exponentially increasing amounts of compute are needed for small gains in performance).
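A rough back-of-envelope sketch of that trade-off, using the standard approximations (~6 FLOPs per parameter per training token, ~2 FLOPs per parameter per generated token). The model size and token counts below are illustrative assumptions, not published figures for any real model:

```python
# Back-of-envelope: when does aggregate inference compute overtake training?
# Standard approximations: training ~6 FLOPs/param/token, inference ~2 FLOPs/param/token.
# All concrete numbers below are assumptions for illustration only.

def training_flops(params: float, train_tokens: float) -> float:
    # Forward + backward pass over every training token.
    return 6 * params * train_tokens

def inference_flops(params: float, tokens_served: float) -> float:
    # Forward pass only, per token served.
    return 2 * params * tokens_served

params = 70e9        # a hypothetical 70B-parameter model
train_tokens = 2e12  # hypothetically trained on 2T tokens

t = training_flops(params, train_tokens)
# Tokens that must be served before cumulative inference matches training:
breakeven_tokens = t / (2 * params)
print(f"training: {t:.2e} FLOPs, break-even at {breakeven_tokens:.2e} served tokens")
```

Under these assumptions the break-even point is 3x the training-token count, so a heavily used model spends most of its lifetime compute on inference, while a model trained and barely used never earns back its training cost.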
The number of adults I know who ask ChatGPT for recipes is non-zero.
Teenagers use it like it’s a search engine. They don’t understand the difference.
I got DeepSeek to run short roleplaying adventures that are surprisingly fun and engaging. It’s an amped-up choose-your-own-adventure, so for this application, the future is bright.
Not a single other LLM can do this in any way approaching acceptable.
And it still lies and makes shit up, but in a fantasy world, I can let it pass unless it’s trying to rob me of experience lol.
When it can do long sessions and entire careers instead of detailed one offs it’ll have found its niche for me. Right now, it’s just a fun toy, prone to hallucinations.
I can’t believe people use these things for code…
> Right now, it’s just a fun toy, prone to hallucinations.
That’s the thing though - with an LLM, it’s all “hallucinations”. They’re just usually close to reality, and are presented with an authoritative, friendly voice.
(Or, in your case, they’re usually close to the established game reality!)
This is the thing I hope people learn about LLMs, it’s all hallucinations.
When an LLM has excellent data from multiple sources to answer your question, it is likely to give a correct answer. But, that answer is still a hallucination. It’s dreaming up a sequence of words that is likely to follow the previous words. It’s more likely to give an “incorrect” hallucination when the data is contradictory or vague. But, the process is identical. It’s just trying to dream up a likely series of words.
I can also see a lot of use in general for gaming! There might be a future where game assets are generated on the fly, dialogue and storylines are without artificial limits, no invisible borders in game worlds. The technology is useful, but not in the way those fools want to force it.
Yes: images where not every pixel is important. NPCs going about their business. The traffic. The weather. Games will use it, I’m sure of it.
Fair, but compare that to the fun of an actual in-person TTRPG. It’s the main way I make new friends as an adult man.
I hereby revoke my permission.
It’s like Facebook’s squandering tens of billions of dollars on the Metaverse even though nobody asked for it or wants it. Ultimately they had to give up on it, and the same thing will happen here.
Buddy, I hope you will lose social permission to keep your head attached to your body. All your heads on spikes, is what I wish for.
“Mommy and daddy gubberment pls help, the CONSUMERS hate my product.”
CEOs aren’t people. That’s why they lobbied to have companies recognized as people. Stop giving them a stage.
How about we train them on killing billionaires?
just need to get rid of jensen, and it will all collapse.
Can we not all just log into ChatGPT at the same time and ask it to count up to a googolplex sequentially in one-second intervals?
“We have to find a compelling use case so we can keep tragedying the commons!”
My sole use for AI has been troubleshooting computer issues. I will say that AI often does a better job than first line tech support when prompted correctly. That being said, I am a tech nerd who knows more than the average computer user that is not in the IT industry. There will always be a reason for human tech support tiers for people who cannot prompt AI correctly, but still need their stuff to work. I personally don’t want AI invading my life any further.
That’s exactly the reason why you get good results when prompting a chatbot. You have the knowledge to ask the right questions with the needed keywords and lingo. What’s problematic is that Microslop and big tech in general are advertising AI as a generic tool for everyone and their grandmother. The result is garbage in, garbage out. It’s not going to work as advertised.
“Microsoft CEO begs for us to use the software that he’s been shoving down our throats for the last 10 years or so or else his corporation will lose money”
When did they ask permission?
What does Capitalism™ say about “innovations” that can’t deliver results? Filtering out crap that only works on some bullshit paper is the one thing capitalism is supposed to be good at.
Capitalism says that the market won’t reward people making those things, and the companies might fail as a result.
But, we’re no longer in a capitalist world. We’re in a corporatist world, closer to technofeudalism, where it doesn’t really matter how bad your idea is, because you aren’t out to make a profit; you’re out to extract rent.
Citizen developers of the world, disband!!
Brother, AI has proven multiple times to make you stupider. It’s not a cognitive amplifier.
No
So the bubble’s finally going to burst, then?
He has my permission to stop.
Microsoft CEO warns that we must ‘do something useful’ with AI or they’ll lose ‘social permission’ to burn electricity on it
<Insert AI generated video of Microsoft CEO dancing around with willies on his head>
Must do something useful? You’re the one selling the damn thing. You can’t build a Pinto and then tell people, “We have to stop burning to death or we’ll lose permission to keep producing faulty cars.”
There is something inherently wrong with your product, and you can’t even fix it because you’re too busy shoving it down everyone’s throats.
It’s like you’re trying to bake cookies using pieces of every plagiarized baking recipe, whether or not they’re related. Then, before you’ve actually tasted the cookies, you’re telling everyone to reach into the oven and try using this “basic” cookie to modify and make their own cookies.
Except the cookies haven’t even baked yet. And before you’ve ever tasted a single fully baked cookie, you’re announcing modifications to your cookie dough recipe based on feedback from your previously undercooked, improperly made cookies.
Go back to small scale. Let people bake their own cookies at home, and report what they’ve discovered. Try upscaling those recipes, and see if you can make any parts more efficient.
And quit telling people to eat your tainted cookies that are poisoning everyone, and then telling them that if they don’t start enjoying your cookies soon, then you’re gonna have to shut down your factory.
Your cookie/Pinto/AI venture deserves to be shut down. Take the L, learn from it, and try again after you figure out how to get it right. Bake a better cookie instead of trying to make better consumers.
It’s getting more and more absurd.
“We can’t think of a good use for this parasite outside of our industry.”
Yeah, cause it’s totally not end-stage capitalism to invest a trillion dollars into something and THEN figure out what it’s for.
Play with algorithms and datasets if you want, but make it efficient. We don’t need thousands of data centers guzzling water and electricity and disturbing the peace just to generate wrong answers and slop. Work on the algorithms, don’t just scale up the slop.
Microsoft’s angle is that they can run their AI on your documents and files (it’s all on OneDrive now, remember?) and “know” about you, and the world as a whole, collectively at all times. The panopticon wet dream of advertisers and governments alike. Plus, hardware will be too expensive for plebs, and we’ll all have scaled-back dumb-terminal tablets that connect to Microsoft Azure Copilot Windows for $49.99/month.
They have nothing consumers want.
Maybe they should look into selling AI CP, since it seems to be great at generating that shit
But only to protect the children™ of course
They’ll spin it with some BS like “if they’re looking at our generations they won’t touch real children” or some shit like that
That might be the only off-ramp.
“The streets are extended gutters and the gutters are full of blood and when the drains finally scab over, all the vermin will drown. The accumulated filth of all their sex and murder will foam up about their waists and all the whores and politicians will look up and shout ‘SAVE US!’…and I’ll look down and whisper ‘No.’”
I have a use for it. Put it in the recycle bin.
Bebopalouie@lemmy.ca 15 hours ago
I know something useful that can be done with AI in its current form. Toss it in the fucking garbage maybe.
JasonDJ@lemmy.zip 15 hours ago
On the one hand, I get it. I really do. It takes an absurd amount of resources for what it does.
On the other hand, I wonder if people said the same of early-generation computers. UNIVAC used tubes of mercury for RAM and consumed 125 kW of electricity to process a whopping 2k operations per second.
Probably not. Most people weren’t aware of it, nor did they have a care for power consumption, water consumption, etc. We were in peak-American Exceptionalism in the post-war era.
But suppose they had, and computers kinda just…died. Right there, in the 1950s. Would we have gone to the moon? Would we have HDTV? iPhones? Social media? A treacherous imbecile in charge of the most powerful military the world has ever seen?
Probably not.
So…I do worry about the consumption, and the ecological and environmental impact. But, what if that is a necessary evil for the continued evolution of technology, and with it, society? And, if it is, do we want that?
BlackDragon@slrpnk.net 13 hours ago
LLMs are dead end tech which is only useful for people who want to do unethical shit. They’re good at lying, making up nonsense, sounding like humans, facilitating scams, and misleading people. No matter how much time and energy is spent developing them, that’s all they’ll ever be good at. They can get better at doing those things, but they’ll never be good at anything actually useful because of the fact that there is no internal logic going on in them. When it tells you the moon is made of various kinds of rock, the exact same thing is happening as when it tells you the moon is made of cheese and bread. It has no way of distinguishing between these two statements. All of its ‘ideas’ are vapor, an illusion, smoke and mirrors. It doesn’t “understand” anything it’s saying, all it does is generate text that looks like something someone who does understand language would say. There is no logic in the background and there cannot be.
oyenyaaow@lemmy.zip 14 hours ago
Image
(www.computerhistory.org/revolution/…/83)
early generation computers fueled a demand that was being supplied by rooms and rooms of human calculators calculating and checking each other’s works for scientists, engineers, businesses, and government agencies
Image
(Manhattan Project, Atomic Heritage Foundation picture)
they would not have died out, because they were a necessary part of the evolution of technology at their time. more importantly, they were more accurate than their human calculators. computers don’t forget to carry a number to the next digit or flip them around. barring exceptionally rare cosmic radiation events. and their technological progression fueled an ever greater need until now when tech has entered post-scarcity when it comes to calculating power.
generative AI, in contrast, was an offering looking for a purpose: spare gigaflops no longer needed, which tech people are trying to sell by building more and more hype for calculating power. sucks to be the one who invested in it, but that’s business. sometimes investments don’t work out. if microsoft can’t hype up a demand, then it is unnecessary technology.
JTode@lemmy.world 7 hours ago
Those old computers you speak of: They worked. There is no comparison to be made here.
They were built in order to give us an edge on the battlefield. More accurate artillery and the like. They did math which humans could do, but which would take humans weeks or months, and the answers were required within timeframes more like 12 hours, because war.
They were so useful, so valuable, that they were worth the treasure spent. They conferred a kind of superintelligence to their users. Those with brains to understand could see this, and so yes, hobbyists found their way to building their own machines, once small CPUs became available, however janky. Anyone who had to do math, who *had* to do math, went into debt if they had to, and learned to use these janky beasts, because the advantage was weeks or months of time they didn’t have to grind on paper.
There is nothing about AI that resembles any of that.