game devs gonna have to use different language to describe what used to be simply called “enemy AI” where exactly zero machine learning is involved
Linus Torvalds reckons AI is ‘90% marketing and 10% reality’
Submitted 1 year ago by vegeta@lemmy.world to technology@lemmy.world
Comments
noxy@yiffit.net 1 year ago
spookedintownsville@lemmy.world 1 year ago
CPU
Hackworth@lemmy.world 1 year ago
Logic and Path-finding?
NABDad@lemmy.world 1 year ago
I had a professor in college that said when an AI problem is solved, it is no longer AI.
Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they’re just tools we use without thinking about them.
I’m sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting what words I want to type based on a statistical likelihood of what comes next from the group of possible words that my gesture could be. This would have been the realm of AI once, but now it’s just the keyboard app on my phone.
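The idea described above can be caricatured in a few lines: rank the words a gesture could plausibly match by combining a path-similarity score with a language-model likelihood. Everything here (the candidate scores, the bigram table) is invented for illustration; real keyboards use trained models over much larger vocabularies.

```python
# Toy sketch of gesture-typing word prediction (all scores hypothetical).

# Words the swiped path could plausibly match, with a made-up
# "path similarity" score in [0, 1].
path_candidates = {"hello": 0.9, "help": 0.7, "hero": 0.4}

# Made-up bigram likelihoods: P(word | previous word).
bigram = {
    ("say", "hello"): 0.30,
    ("say", "help"): 0.05,
    ("say", "hero"): 0.01,
}

def best_word(prev_word, candidates):
    """Pick the candidate maximizing path score * language-model score."""
    return max(
        candidates,
        key=lambda w: candidates[w] * bigram.get((prev_word, w), 1e-6),
    )

print(best_word("say", path_candidates))  # "hello": best combined score
```

Even with a sloppier gesture, the statistical prior ("hello" usually follows "say") pulls the right word to the top.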
marzhall@lemmy.world 1 year ago
designatedhacker@lemm.ee 1 year ago
LLMs without some sort of symbolic reasoning layer aren’t actually able to hold a model of their context and the relationships within it. They predict the next token, but fall apart when you change the numbers in a problem or add a negation to the prompt.
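The "predict the next token" point can be made concrete with a toy n-gram model (a deliberate caricature; real LLMs are vastly more capable, but the failure mode is the same in spirit): the model emits whatever continuation was most frequent in its training data, so a rare negation in the prompt gets steamrolled by the common pattern.

```python
# Minimal caricature of next-token prediction from frequency counts.
from collections import Counter

corpus = [
    "the sky is blue", "the sky is blue", "the sky is blue",
    "the sky is not blue",
]

# Count which token follows each context of up to 3 tokens.
counts = {}
for line in corpus:
    toks = line.split()
    for i in range(len(toks) - 1):
        ctx = tuple(toks[max(0, i - 2): i + 1])
        counts.setdefault(ctx, Counter())[toks[i + 1]] += 1

def next_token(context):
    """Return the statistically most common continuation of the context."""
    ctx = tuple(context.split()[-3:])
    return counts[ctx].most_common(1)[0][0]

print(next_token("the sky is"))  # "blue" - the dominant pattern wins
```

The context "the sky is" was followed by "not" once and "blue" three times, so "blue" always wins; nothing in the model represents what "not" means.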
Awesome for protein research, summarization, speech recognition, speech generation, deep fakes, spam creation, RAG document summary, brainstorming, content classification, etc. I don’t even think we’ve found all the patterns they’d be great at predicting.
There are tons of great uses, but just throwing more data, memory, compute, and power at transformers is likely to hit a wall without new models. All the AGI hype is a bit overblown. That’s not from me; that’s Noam Chomsky: youtu.be/axuGfh4UR9Q?t=9271.
NABDad@lemmy.world 1 year ago
I’ve often thought LLMs could replace all of the C-suites and upper and middle management.
Funny how no companies push that as a possibility.
Zip2@feddit.uk 1 year ago
Oh please. Wait until they release 128bit AI quantum blockchain.
nobleshift@lemmy.world 1 year ago
Zink@programming.dev 1 year ago
I feel like they snuck in a little square of reasonable terms with
Best practices, Optimization, Industry standard, Authenticate
But now that I’ve typed it, I’m scared that optimization and authenticate have gross business-speak definitions I just don’t know about yet.
nobleshift@lemmy.world 1 year ago
lurch@sh.itjust.works 1 year ago
It will have revolutionary rock star synergy
Zip2@feddit.uk 1 year ago
The blue sky, out of the box thinking will disrupt the glass ceiling.
pHr34kY@lemmy.world 1 year ago
I’m waiting for the part where it gets used for things that are not lazy, manipulative and dishonest. Until then, I’m sitting it out like Linus.
Kusimulkku@lemm.ee 1 year ago
I’m waiting for the part where it gets used for things that are not lazy
Replacing menial or boring tasks is like 90% of what I’m hoping from it.
SkyeStarfall@lemmy.blahaj.zone 1 year ago
AI has been used for these things for decades, they are just in the background and not noticed by laypeople
Though the biggest issue is that when people say “AI” today, they mean specifically LLMs, but the world of AI is so much larger than that
Z3k3@lemmy.world 1 year ago
This is where I’m at. The push right now has NFT pump-and-dump energy.
The moment someone says AI to me right now, I auto-disengage. When the dust settles, I’ll look at it seriously.
Grandwolf319@sh.itjust.works 1 year ago
And then people will complain about that saying it’s almost all hype and no substance.
Then that one tech bro will keep insisting that lemmy is being unfair to AI and there are so many good use cases.
No one is denying the 10% use cases, we just don’t think it’s special or needs extra attention since those use cases already had other possible algorithmic solutions.
Tech bros need to realize, even if there are some use cases for AI, there has not been any revolution, stop trying to make it happen and enjoy your new slightly better tool in silence.
cybersandwich@lemmy.world 1 year ago
Hi! It’s me, the guy you discussed this with the other day! The guy that said Lemmy is full of AI wet blankets.
I am 100% with Linus AND would say the 10% good use cases can be transformative.
Since there isn’t any room for nuance on the Internet, my comment seemed to ruffle feathers. There are definitely some folks out there that act like ALL AI is worthless and LLMs specifically have no value. I provided a list of use cases that I use pretty frequently where it can add value. (Then folks started picking it apart with strawmen).
I gotta say, though, this wave of AI tech feels different. It reminds me of the early days of the web/computing in the late 90s and early 2000s, where it’s fun, exciting, and people are doing all sorts of weird, quirky shit with it, and it’s not even close to perfect. It breaks a lot and has limitations, but there is something there. There is a lot of promise.
Like I said elsewhere, it ain’t replacing humans any time soon, we won’t have AGI for decades, and it’s not solving world hunger. That’s all hype-bro bullshit. But there is actual value here.
Grandwolf319@sh.itjust.works 1 year ago
Hi! It’s me, the guy you discussed this with the other day! The guy that said Lemmy is full of AI wet blankets.
Omg you found me in another post. I’m not even mad; I do like how passionate you are about things.
Since there isn’t any room for nuance on the Internet, my comment seemed to ruffle feathers. There are definitely some folks out there that act like ALL AI is worthless and LLMs specifically have no value. I provided a list of use cases that I use pretty frequently where it can add value. (Then folks started picking it apart with strawmen).
What you’re talking about is polarization and yeah, it’s a big issue.
This is a good example: I never used any strawman, nor did I disagree with the fact that it can be useful in some shape or form. I was trying to say its value is much, much lower than what people claim it to be.
But that’s the issue with polarization: me saying there is much less value can be interpreted as absolutely zero, and I apologize for contributing to the polarization.
FartsWithAnAccent@lemmy.world 1 year ago
Seems generous
brucethemoose@lemmy.world 1 year ago
As a fervent AI enthusiast, I disagree.
…I’d say it’s 97% hype and marketing.
It’s crazy how much FUD is flying around, and it legitimately buries good open research. It’s also crazy what these giant corporations are claiming they’re going to do. TSMC allegedly calling Sam Altman a podcast bro is spot on, and I’d add “manipulative vampire” to that.
Talk to any long-time resident of localllama and similar “local” AI communities who actually digs into this stuff, and you’ll find lots of healthy skepticism, not the crypto-like AI bros you find on LinkedIn, Twitter and such, who blot everything else out.
WoodScientist@lemmy.world 1 year ago
I think we should indict Sam Altman on two sets of charges:
- A set of securities fraud charges.
- 8 billion counts of criminal reckless endangerment.
He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?
So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, then he’s committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.
FlyingSquid@lemmy.world 1 year ago
“When you’re rich, they let you do it.”
just_an_average_joe@lemmy.dbzer0.com 1 year ago
The saddest part is, this is going to cause yet another AI winter. The first few ones were caused by genuine over-enthusiasm but this one is purely fuelled by greed.
sploosh@lemmy.world 1 year ago
The AI ecosystem is flooded, we need a good bubble pop to slow down the massive waste of resources that our current info-remix-based-on-what-you-will-likely-react-positively-to shit-tier AI represents.
tacosanonymous@lemm.ee 1 year ago
Agreed that’s why it’s so dangerous. These tech bros are going to do damage with their shitty products. It seems like it’s Altman’s goal, honestly.
just_an_average_joe@lemmy.dbzer0.com 1 year ago
He wants money/power, and he is getting it. The rest of the AI field will forever be haunted by his greed.
Blackmist@feddit.uk 1 year ago
TSMC are probably making more money than anyone in this goldrush by selling the shovels and picks, so if that’s their opinion, I feel people should listen…
There’s little in the AI business plan other than hurling money at it and hoping job losses ensue.
brucethemoose@lemmy.world 1 year ago
TSMC doesn’t really have official opinions, they take silicon orders for money and shrug happily. Being neutral is good for business.
Altman’s scheme is just a whole other level of crazy though.
KSPAtlas@sopuli.xyz 1 year ago
After getting my head around the basics of how LLMs work, I thought, “people rely on this for information?” The model seems ok for tasks like summarisation, though.
brbposting@sh.itjust.works 1 year ago
I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.
Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.
In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.
dan@upvote.au 1 year ago
It’s good for coding if you train it on your own code base. Not for very complex code, but it’s great for common patterns and straightforward questions specific to your code base (e.g. “how do I load a user’s most recent order given their email address?”)
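For the kind of question quoted above, the expected answer is usually a short join-and-sort query. Here’s a hedged sketch of what that answer might look like; the schema, table names, and sample data are all invented for illustration, not taken from any real code base.

```python
# Hypothetical schema and data, set up in-memory just to make the
# "most recent order by email" query runnable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER,
                         created_at TEXT);
    INSERT INTO users  VALUES (1, 'a@example.com');
    INSERT INTO orders VALUES (1, 1, '2024-01-01'), (2, 1, '2024-06-01');
""")

def most_recent_order(email):
    """Return the id of the newest order for the user with this email."""
    row = conn.execute(
        """SELECT o.id FROM orders o
           JOIN users u ON u.id = o.user_id
           WHERE u.email = ?
           ORDER BY o.created_at DESC
           LIMIT 1""",
        (email,),
    ).fetchone()
    return row[0] if row else None

print(most_recent_order("a@example.com"))  # order 2, the June one
```

The value of a codebase-aware assistant is that it already knows which tables and columns play these roles, so you don’t have to dig through the schema yourself.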
brucethemoose@lemmy.world 1 year ago
the model seems ok for tasks like summarisation though
That, and retrieval, and the business use cases so far, but even then only where it’s acceptable for the results to be wrong somewhat frequently.
Damage@feddit.it 1 year ago
TSMC allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.
What’s the source for that? It sounds hilarious
brucethemoose@lemmy.world 1 year ago
web.archive.org/…/openai-plan-electricity.html
When Mr. Altman visited TSMC’s headquarters in Taiwan shortly after he started his fund-raising effort, he told its executives that it would take $7 trillion and many years to build 36 semiconductor plants and additional data centers to fulfill his vision, two people briefed on the conversation said. It was his first visit to one of the multibillion-dollar plants.
TSMC’s executives found the idea so absurd that they took to calling Mr. Altman a “podcasting bro,” one of these people said. Adding just a few more chip-making plants, much less 36, was incredibly risky because of the money involved.
paddirn@lemmy.world 1 year ago
I really want to like AI. I’d love to have an intelligent AI assistant or something, but I just struggle to find any uses for it outside of some really niche cases or basic brainstorming tasks. Otherwise, it just feels like a lot of work for very little benefit, or results that I can’t even trust or use.
dan@upvote.au 1 year ago
I receive alerts when people are outside my house, via security cameras, Blue Iris, CodeProject AI, Node-RED and Home Assistant, with a Google Coral for local AI. That’s a good use case for AI.
brucethemoose@lemmy.world 1 year ago
I dunno about that.
I keep Qwen 32B loaded on my desktop pretty much whenever it’s on, as an (unreliable) assistant to analyze or parse big texts, to do quick chores, to bounce ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that)
billwashere@lemmy.world 1 year ago
Yep the current iteration is. But should we cross the threshold to full AGI… that’s either gonna be awesome or world ending. Not sure which.
merc@sh.itjust.works 1 year ago
What makes you think there’s a threshold?
brucethemoose@lemmy.world 1 year ago
Current LLMs cannot be AGI, no matter how big they are. The architecture just isn’t right.
Damage@feddit.it 1 year ago
I know nothing about anything, but I unfoundedly believe we’re still very far away from the computing power required for that. I think we still underestimate the power of biological brains.
Naz@sh.itjust.works 1 year ago
Based on what I’ve witnessed so far, people will play with their AGI units for a bit and then put them down to continue scrolling memes.
Which means it is neither awesome, nor world-ending, but just boring/business as usual.
Evotech@lemmy.world 1 year ago
It’s selling the future, but nobody knows if we can actually get there
brucethemoose@lemmy.world 1 year ago
It’s selling an anticompetitive dystopia. It’s selling a Facebook monopoly vs selling the Fediverse.
We don’t need 7 trillion dollars of datacenters burning the Earth; we need collaborative, open source innovation.
ininewcrow@lemmy.ca 1 year ago
The first part is true … no one cares about the second part of your statement.
conciselyverbose@sh.itjust.works 1 year ago
Seriously, I’d love to be enthusiastic about it because it’s genuinely cool what you can do with math.
But the lies that are shoved in our faces are just so fucking much and so fucking egregious that it’s pretty much impossible.
And on top of that LLMs are hugely overshadowing actual interesting approaches for funding.
falkerie71@sh.itjust.works 1 year ago
For real. Being a software engineer with basic knowledge in ML, I’m just sick of companies from every industry being so desperate to cling onto the hype train they’re willing to label anything with AI, even if it has little or nothing to do with it, just to boost their stock value. I would be so uncomfortable being an employee having to do this.
Badland9085@lemm.ee [bot] 1 year ago
As someone who was working really hard to get my company to use some classical ML (with very limited amounts of data), who has some knowledge of how AI works, and who just generally wants to do some cool math stuff at work, being asked incessantly to shove AI into any problem our execs think is a “good sell” and being pressured to think about how we can “use AI” was a terrible feeling. They now think my work is insufficient and have been tightening the noose on my team.
Mikelius@lemmy.world 1 year ago
For sure, it seems like 90% of AI startups are nothing more than front-end wrappers for a GPT instance.
Valmond@lemmy.world 1 year ago
Ya, it’s like machine learning but better. That’s about it IMO.
asexualchangeling@lemmy.ml 1 year ago
That’s like saying breathing is like turning oxygen into carbon dioxide but better…
narc0tic_bird@lemm.ee 1 year ago
Sounds about right. There are some valid and good use cases for “AI”, but the majority is just buzzword marketing.
billwashere@lemmy.world 1 year ago
kitnaht@lemmy.world 1 year ago
Honestly, he’s wrong though.
I know tons of full stack developers who use AI to GREATLY speed up their workflow. I’ve used AI image generators to get something I wanted into the concept stage before paying an artist to do the work, with the revisions I wanted that I couldn’t get the AI to produce properly.
And first and foremost, they’re great for surfacing information that is discussed and available but might be buried with no SEO behind it to surface it. They are terrible at deducing things themselves, because they can’t “think”, or at coming up with solutions that others haven’t already, but as long as people are aware of those limitations, they’re a pretty good tool to have.
antonim@lemmy.dbzer0.com 1 year ago
they’re great for surfacing information that is discussed and available but might be buried with no SEO behind it to surface it
This is what I’ve seen many people claim. But it is a weak compliment for AI, and more of a criticism of the current web search engines. Why is that information unavailable to search engines, but is available to LLMs? If someone has put in the work to find and feed the quality content to LLMs, why couldn’t that same effort have been invested in Google Search?
kitnaht@lemmy.world 1 year ago
If someone has put in the work to find and feed the quality content to LLMs, why couldn’t that same effort have been invested in Google Search?
I’d rather have a world where 10 companies can compete with Google Search using AI than one where they dump money into a monopoly.
DacoTaco@lemmy.world 1 year ago
He isn’t wrong. This comes from somebody who technically uses AI daily to help develop (GitHub Copilot in Visual Studio, assisting code prediction based on the code base or the solution), but AI is marketed even worse than blockchain was back in 2017. It’s everywhere, in every product, even if it doesn’t have AI or has nothing to do with it.
There is shit like Microsoft Recall, Apple Intelligence, Bing Copilot, Office Copilot, …
All of those are just… nothing.
There are also chatbots which bring nothing new to the table either.
Everyone and everything wants to market their stuff with AI and it’s disgusting.
Does that mean that current AI tech can’t bring anything to the table? No, it totally can, but 90% of the AI stuff out there is, just like Linus says, marketing bullshit.
index@sh.itjust.works 1 year ago
Half of the people here, Linus included, must have never used Stable Diffusion
noodlejetski@lemm.ee 1 year ago
I know tons of full stack developers who use AI to GREATLY speed up their workflow.
GreenKnight23@lemmy.world 1 year ago
How dare you bring sources into this opinion!
Valmond@lemmy.world 1 year ago
If you are just blatantly copying art, well yeah you’re stealing it.
pimento64@sopuli.xyz 1 year ago
Let me guess. Dumped by an art girl and anxious about the $600 you invested?
DrSleepless@lemmy.world 1 year ago
Just like Furbys
TheImpressiveX@lemmy.ml 1 year ago
What happened to Linus? He looks so old now…
Lost_My_Mind@lemmy.world 1 year ago
So basically just like linux. Except linux has no marketing…So 10% reality, and 90% uhhhhhhhhhh…
A_Random_Idiot@lemmy.world 1 year ago
AI is nothing more than a way for big businesses to automate more work and fire more people.
and to do that at the expense of 30+ years of power reduction and efficiency gains, to the point that private companies are literally buying/building/restarting old power plants just to cover the insane power demand, because operating a power plant is cheaper than paying for the energy.
daniskarma@lemmy.dbzer0.com 1 year ago
All technology in human history has done that. What are you proposing? Reject technology to keep people employed on inefficient tasks?
At some point people need to start thinking that it’s better to end capitalism than to return to monke.