Comment on AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns
worhui@lemmy.world 18 hours ago
If he wanted people to like it then he should have made it do things people want it to do.
It is the new metaverse.
CaptDust@sh.itjust.works 18 hours ago
Hell I’d almost settle for just “make it work”. No disclaimers, no bullshitting. Computers should be optimized and accurate. AI is neither.
worhui@lemmy.world 18 hours ago
AI does work great, at some stuff. The problem is pushing it into places it doesn’t belong.
It’s a good grammar and spell check. It helps me get a lot of English looking more natural.
It’s also great for troubleshooting consumer electronics.
It’s far better at search than google.
Even then it can only help, not replace folks or complete tasks.
snooggums@piefed.world 17 hours ago
It only looks good in comparison to Google search because they trashed Google search.
AmbitiousProcess@piefed.social 17 hours ago
Which of course, Google did just so you’d have to search more, so you’d see more ads.
AmbitiousProcess@piefed.social 16 hours ago
I can generally agree with this, but I think a lot of people overestimate where it DOES belong.
For example, you’ll see a lot of tech bros talking about how AI is great at replacing artists, but a bunch of artists who know their shit can show you every possible way it just isn’t as good as human-made work. Meanwhile, those same artists might say that AI is still incredibly good at programming… because they’re not programmers.
Totally. After all, it’s built on a similar foundation to existing spellcheck systems: predict the likely next word. It’s good as a thesaurus too. (e.g. “what’s that word for someone who’s full of themselves, self-centered, and boastful?” and it’ll spit out “egocentric")
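That “predict the likely next word” idea can be sketched with a toy bigram model (this is just an illustration of the training objective, not how real LLMs work internally; they use neural networks over a huge corpus, but the goal is the same: guess the most probable next token):

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a real model trains on trillions of tokens.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it follows "the" more often than any other word
```

The failure mode described below falls straight out of this design: the model always produces *some* plausible next word, whether or not it actually knows the answer.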
Only for very basic, common, or broad issues. LLMs generally sound very confident, and provide answers regardless of whether there’s actually a strong source. Plus, they tend to ignore the context of where they source information from.
For example, if I ask it how to change X setting in a niche piece of software, it will often just make up an entire name for a setting or menu, because it just… has to say something that sounds right, since the previous text was “Absolutely! You can fix x by…” and it’s just predicting the most likely term, which isn’t going to be “wait, nevermind, sorry I don’t think that’s a setting that even exists!”, but a made up name instead. (this is one of the reasons why “thinking” versions of models perform better, because the internal dialogue can reasonably include a correction, retraction, or self-questioning)
It will pull from names and text of entirely different posts that happened to display on the page it scraped, make up words that never appeared on any page, or infer a meaning that doesn’t actually exist.
But if you have a more common question like “my computer is having x issues, what could this be?” it’ll probably give you a good broad list, and if you narrow it down to RAM issues, it’ll probably recommend you MemTest86.
As someone else already mentioned, this is mostly just because Google deliberately made search worse. Other search engines that haven’t enshittified, like the one I use (Kagi), tend to give much better results than Google, without you needing to use AI features at all.
On that note though, there is actually an interesting trend where AI models tend to pick lower-ranked, less SEO-optimized pages as sources, but still tend to pick ones with better information on average. It’s quite interesting, though I’m no expert on that in particular and couldn’t really tell you why other than “it can probably interpret the context of a page better than an algorithm made to do it as quickly as possible, at scale, returning 30 results in 0.3 seconds, given all the extra computing power and time.”
Agreed.
bridgeenjoyer@sh.itjust.works 15 hours ago
I find that people only think it’s good when using it for something they don’t already know, so then they believe everything it says. Catch-22. When they use it for something they already know, it’s very easy to see how it lies and makes up shit, because it’s a Markov chain on steroids and is not impressive in any way. Those billions could have housed and fed every human in a starving country, but instead we have the digital equivalent of Funko Pop minions.
I also find in daily life that those who use it and brag about it are 95% of the time the most unintelligent people I know.
Note this doesn’t apply to machine learning.
Wirlocke@lemmy.blahaj.zone 16 hours ago
Fundamentally, due to their design, LLMs are digital duct tape.
The entire history of computer science has been making compromises between efficient machine code and human-readable language. LLMs solve this in a beautifully janky way, like duct tape.
But it’s ultimately still a compromise: you’ll never get machine accuracy from an LLM, because its sole purpose is to fulfill the “human readable” part of that deal. So its applications are revolutionary in a “how did you put together this car engine with only duct tape?” kind of way.
CaptDust@sh.itjust.works 17 hours ago
We’ll have to agree to disagree. To go through your points: spell check is not particularly impressive. That was solved long ago without needing the power demands of a small town. Grammar, maybe - but in my experience my “LLM powered” keyboard’s suggestions are still worse than old T9.
I’ve had no luck troubleshooting anything with AI. It’s often trained on old data, tries to instruct you to change settings that don’t exist, or dreams up controls that appear on “similar” hardware. Sure you can infer a solution, maybe, but it’s rarely correct at first response. It’ll happily run you through steps that are inconsequential to fixing a problem.
It might be better than indexed search NOW - but mostly because LLMs wrecked that too. I used to be able to use a couple search operators and get directly to the information I needed - now search is just sifting through slop sites.
And it does all this half-assing while using enough power to justify dedicated nuclear reactors. I can’t help but feel we’ve regressed on so many fronts.