Comment on Google's Agentic AI wipes user's entire HDD without permission in catastrophic failure
Hawanja@lemmy.world 17 hours ago
Yet another reason to not use any of this AI bullshit
nutsack@lemmy.dbzer0.com 6 hours ago
Every company I've interviewed with in the last year wants experience with these tools
Nalivai@lemmy.world 2 hours ago
A year ago I was looking for a job, and by the end I had three similar offers. To decide, I asked all of them whether they use LLMs. Two said “yes, very much so, it’s the future, AI is smarter than god”, and the third said “only if you really want to, but nowhere it matters”. I chose the third one. The other two are now bankrupt.
BluesF@lemmy.world 5 hours ago
The company I work for (we make scientific instruments mostly) has been pushing hard to get us to use AI literally anywhere we can. Every time you talk to IT about a project they come back with 10 proposals for how to add AI to it. It’s a nightmare.
I got an email from a supplier today that acknowledged that “76% of CFOs believe AI will be a game-changer, [but] 86% say it still hasn’t delivered meaningful value. The issue isn’t the technology, it’s the foundation it’s built on.”
Like, come on, no it isn’t. The technology is not ready for the kinds of applications it’s being used for. It makes a half-decent search engine alternative: if you’re careful not to trust every word it says, it can be quite good at identifying things from descriptions and finding obscure stuff. But otherwise, until the hallucination problem is solved, it’s just not ready for large-scale use.
mirshafie@europe.pub 5 hours ago
I think you’re underselling it a bit though. It is far better than a modern search engine, although that is in part because of all of the SEO slop that Google has ingested. The fact that you need to think critically is not something new and it’s never going to go away either. If you were paying real-life human experts to answer your every question you would still need to think for yourself.
Still, I think the C-suite doesn’t really have a good grasp of the limits of LLMs. This could be partly because they themselves work a lot with words and visualization, areas where LLMs show promise. It’s much less useful if you’re in engineering, although I think ultimately AI will transform engineering too. It is of course annoying and potentially destructive that they’re trying to force-push it into areas where it’s not useful (yet).
Nalivai@lemmy.world 2 hours ago
Very much disagree with that. Google got significantly worse, but LLM results are worse still. You do need to think critically about it, but with LLM output there’s no way to check validity other than doing another search without the LLM to find sources, in which case why even bother with the generator in the first place, or accepting that some of your new info may be incorrect without knowing which part.
With conventional search you have all the context of your result: the reputation of the website itself, info about who wrote the article, the tone of the piece, the comments, all the subtle clues we’ve learnt to pick up on, both from a lifetime of experience on the internet and a civilisation’s worth of experience with human interaction. With the generator you have none of that; you get something stated as fact, where everything carries the same weight and the same validity, and even when it cites sources, those can be outright lies.
trannus_aran@lemmy.blahaj.zone 5 hours ago
Yeah, because the market is run by morons, and all anyone wants to do is get the stock price up long enough to collect a good bonus and cash out after the quarter. It’s pretty telling that these tools still haven’t generated a profit.