I had a professor in college who said that when an AI problem is solved, it is no longer AI.
Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they’re just tools we use without thinking about them.
I’m sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting what words I want to type based on a statistical likelihood of what comes next from the group of possible words that my gesture could be. This would have been the realm of AI once, but now it’s just the keyboard app on my phone.
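That prediction step can be sketched in a few lines. This is a toy illustration of the idea, not any real keyboard's algorithm; the candidate words, shape-match scores, and bigram counts are all made up:

```python
# Toy sketch of gesture-typing word selection: combine how well each
# candidate word matches the swipe shape with how likely it is to
# follow the previous word in a (made-up) corpus.

# Candidates the gesture could plausibly be, with a 0..1 shape-match score.
gesture_match = {"hello": 0.9, "help": 0.7, "hollow": 0.4}

# Hypothetical bigram counts: how often each word follows "say".
bigram_counts = {("say", "hello"): 120, ("say", "help"): 15, ("say", "hollow"): 1}

def predict(prev_word, candidates):
    # Smooth unseen bigrams with a count of 1 so nothing scores zero.
    total = sum(bigram_counts.get((prev_word, w), 1) for w in candidates)
    def score(w):
        likelihood = bigram_counts.get((prev_word, w), 1) / total
        return gesture_match[w] * likelihood
    return max(candidates, key=score)

print(predict("say", list(gesture_match)))  # picks "hello"
```

Real keyboards use far richer language models, but the shape is the same: a geometric match score weighted by the statistical likelihood of what comes next.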
brucethemoose@lemmy.world 3 weeks ago
As a fervent AI enthusiast, I disagree.
…I’d say it’s 97% hype and marketing.
It’s crazy how much FUD is flying around, and it legitimately buries good open research. It’s also crazy what these giant corporations claim they’re going to do. TSMC allegedly calling Sam Altman a “podcast bro” is spot on, and I’d add “manipulative vampire” to that.
Talk to any long-time resident of localllama and similar “local” AI communities who actually digs into this stuff, and you’ll find lots of healthy skepticism, not the crypto-like AI bros you find on LinkedIn, Twitter and such, who blot everything else out.
falkerie71@sh.itjust.works 3 weeks ago
For real. As a software engineer with basic knowledge of ML, I’m just sick of companies from every industry being so desperate to cling to the hype train that they’re willing to slap an AI label on anything, even if it has little or nothing to do with it, just to boost their stock value. I would be so uncomfortable being an employee having to do this.
Mikelius@lemmy.world 3 weeks ago
For sure, it seems like 90% of AI startups are nothing more than front-end wrappers around a GPT instance.
Badland9085@lemm.ee [bot] 3 weeks ago
As someone who was working really hard to get my company to use some classical ML (with very limited amounts of data), who has some knowledge of how AI works, and who just generally wants to do some cool math stuff at work, being asked incessantly to shove AI into any problem our execs think is a “good sell,” and being pressured to think about how we can “use AI,” was a terrible feeling. They now think my work is insufficient and have been tightening the noose on my team.
Blackmist@feddit.uk 3 weeks ago
TSMC are probably making more money than anyone in this gold rush by selling the shovels and picks, so if that’s their opinion, I feel people should listen…
There’s little in the AI business plan other than hurling money at it and hoping job losses ensue.
brucethemoose@lemmy.world 3 weeks ago
TSMC doesn’t really have official opinions, they take silicon orders for money and shrug happily. Being neutral is good for business.
Altman’s scheme is just a whole other level of crazy though.
conciselyverbose@sh.itjust.works 3 weeks ago
Seriously, I’d love to be enthusiastic about it because it’s genuinely cool what you can do with math.
But the lies that are shoved in our faces are just so fucking much and so fucking egregious that it’s pretty much impossible.
And on top of that, LLMs are hugely overshadowing actually interesting approaches when it comes to funding.
WoodScientist@lemmy.world 3 weeks ago
I think we should indict Sam Altman on two sets of charges:
A set of securities fraud charges.
8 billion counts of criminal reckless endangerment.
He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?
So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, then he’s committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.
FlyingSquid@lemmy.world 3 weeks ago
“When you’re rich, they let you do it.”
paddirn@lemmy.world 3 weeks ago
I really want to like AI, I’d love to have an intelligent AI assistant or something, but I just struggle to find any uses for it outside of some really niche cases or basic brainstorming tasks. Otherwise, it just feels like a lot of work for very little benefit, or results that I can’t even trust or use.
brucethemoose@lemmy.world 3 weeks ago
I dunno about that.
I keep Qwen 32B loaded on my desktop pretty much whenever it’s on, as an (unreliable) assistant to analyze or parse big texts, do quick chores, or bounce ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).
dan@upvote.au 3 weeks ago
I receive alerts when people are outside my house, using security cameras, Blue Iris, CodeProject AI, Node-RED and Home Assistant, using a Google Coral for local AI. That’s a good use case for AI.
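The core logic of that kind of setup is simple enough to sketch. This is a hypothetical illustration, not the actual Blue Iris/CodeProject.AI/Node-RED wiring: the detection payload shape, the `notify` callback, and the thresholds are all assumptions:

```python
# Minimal sketch of a local-AI camera alert: act only on "person"
# detections above a confidence threshold, with a cooldown so one
# visitor doesn't trigger a flood of notifications.
import time

COOLDOWN_SECONDS = 60
_last_alert = 0.0

def handle_detection(detection, notify):
    """detection: a dict like {"label": "person", "confidence": 0.87}
    notify: a callable that actually sends the alert (hypothetical)."""
    global _last_alert
    if detection["label"] != "person" or detection["confidence"] < 0.6:
        return False
    now = time.monotonic()
    if now - _last_alert < COOLDOWN_SECONDS:
        return False
    _last_alert = now
    notify("Person detected outside")
    return True
```

In a real deployment the detection comes from the Coral-accelerated model and the notification goes out through Home Assistant, but the filter-threshold-cooldown pattern is the same.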
just_an_average_joe@lemmy.dbzer0.com 3 weeks ago
The saddest part is that this is going to cause yet another AI winter. The first few were caused by genuine over-enthusiasm, but this one is purely fuelled by greed.
sploosh@lemmy.world 3 weeks ago
The AI ecosystem is flooded, we need a good bubble pop to slow down the massive waste of resources that our current info-remix-based-on-what-you-will-likely-react-positively-to shit-tier AI represents.
tacosanonymous@lemm.ee 3 weeks ago
Agreed, and that’s why it’s so dangerous. These tech bros are going to do damage with their shitty products. It seems like that’s Altman’s goal, honestly.
just_an_average_joe@lemmy.dbzer0.com 3 weeks ago
He wants money/power, and he is getting it. The rest of the AI field will forever be haunted by his greed.
Valmond@lemmy.world 3 weeks ago
Ya, it’s like machine learning but better. That’s about it IMO.
brucethemoose@lemmy.world 3 weeks ago
I mean… it is machine learning.
asexualchangeling@lemmy.ml 3 weeks ago
That’s like saying breathing is like turning oxygen into carbon dioxide but better…
KSPAtlas@sopuli.xyz 3 weeks ago
After getting my head around the basics of how LLMs work, I thought, “people rely on this for information?” The models seem OK for tasks like summarisation, though.
brbposting@sh.itjust.works 3 weeks ago
I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.
Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.
In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.
dan@upvote.au 3 weeks ago
It’s good for coding if you train it on your own code base. Not for very complex code, but it’s great for common patterns and straightforward questions specific to your code base (e.g. “how do I load a user’s most recent order given their email address?”).
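In practice, “training it on your code base” usually means retrieval rather than retraining: finding the relevant snippets and pasting them into the prompt. A crude sketch of just the retrieval half, with a made-up snippet store and plain keyword overlap instead of embeddings:

```python
# Crude sketch of the retrieval step behind a codebase-aware assistant:
# rank files by word overlap with the question, then feed the top hits
# to the LLM as context. Real systems use embedding similarity.

snippets = {
    "orders.py": "def most_recent_order(email): ...  # load the most recent order for a user",
    "users.py": "def find_user(email): ...  # query users table by email",
    "billing.py": "def charge(card, amount): ...",
}

def top_snippets(question, k=2):
    words = set(question.lower().split())
    def overlap(item):
        return len(words & set(item[1].lower().split()))
    ranked = sorted(snippets.items(), key=overlap, reverse=True)
    return [name for name, _ in ranked[:k]]

context = top_snippets("how do I load a user's most recent order given their email address?")
# context == ["orders.py", "users.py"]; these files would be prepended
# to the LLM prompt so the answer is grounded in the actual code.
```

Keyword overlap is deliberately naive here; swapping in embedding search changes the ranking function but not the overall retrieve-then-prompt shape.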
brucethemoose@lemmy.world 3 weeks ago
That, and retrieval, and the business use cases so far. But even then, only where it’s acceptable for the results to be wrong somewhat frequently.
Evotech@lemmy.world 3 weeks ago
It’s selling the future, but nobody knows if we can actually get there
brucethemoose@lemmy.world 3 weeks ago
It’s selling an anticompetitive dystopia. It’s selling a Facebook monopoly vs selling the Fediverse.
We don’t need $7 trillion of datacenters burning the Earth; we need collaborative, open-source innovation.
ininewcrow@lemmy.ca 3 weeks ago
The first part is true … no one cares about the second part of your statement.
Damage@feddit.it 3 weeks ago
What’s the source for that? It sounds hilarious
brucethemoose@lemmy.world 3 weeks ago
web.archive.org/…/openai-plan-electricity.html
billwashere@lemmy.world 3 weeks ago
Yep, the current iteration is. But should we cross the threshold to full AGI… that’s either gonna be awesome or world-ending. Not sure which.
brucethemoose@lemmy.world 3 weeks ago
Current LLMs cannot be AGI, no matter how big they are. The architecture just isn’t right.
Naz@sh.itjust.works 3 weeks ago
Based on what I’ve witnessed so far, people will play with their AGI units for a bit and then put them down to continue scrolling memes.
Which means it is neither awesome, nor world-ending, but just boring/business as usual.
Damage@feddit.it 3 weeks ago
I know nothing about anything, but I unfoundedly believe we’re still very far away from the computing power required for that. I think we still underestimate the power of biological brains.
merc@sh.itjust.works 3 weeks ago
What makes you think there’s a threshold?