Comment on AI spurs employees to work harder, faster, and with fewer breaks, study finds
stealth_cookies@lemmy.ca 18 hours ago
I honestly used AI for something other than summarizing a meeting yesterday. It failed so miserably that I'm really not inclined to use it again. Maybe I was wrong to assume it could summarize a simple graph into a table for me.
unnamed1@feddit.org 18 hours ago
You must have done things wrong. These cases actually work extremely well. Like it or not.
Passerby6497@lemmy.world 11 hours ago
Yeah, after all, LLMs are known for their ability to do things correctly and not make up tons of random bullshit.
unnamed1@feddit.org 2 hours ago
This is too generalised a statement. You could say the same about people. I could tell you how well these things (not LLMs in general, I mean mature use cases) actually work at many companies I've seen, but you still won't accept it, so what's the point.
Passerby6497@lemmy.world 47 minutes ago
I would struggle to accept any statement that doesn't match my experience and the experience of the vast majority of the people I talk to about this.
Because you can tell me the sky is purple with polka dots, but without evidence I'm not going to do more than listen to your experience.
ImgurRefugee114@reddthat.com 18 hours ago
AI has a lot of pitfalls. It helps to know how these systems work: tokens, context, training, harnesses and tools… because then nonsense like this makes a lot more sense. (For context, I later told it to use JavaScript to manipulate strings to accomplish this task and it did a much better job. Still needed touchups, of course.)
Tollana1234567@lemmy.today 15 hours ago
I used it for the first time a few weeks ago; I can't trust the results, as it doesn't verify the actual sources it gets the numbers/costs from. It was about an ACA plan.
brsrklf@jlai.lu 17 hours ago
A co-worker not long ago had AI (fucking copilot in this case) randomly trying to analyze a spreadsheet report with a list of users.
There wasn't any specific need to do this right then, but, curious, he let it do its thing. The AI correctly identified it was a list of user accounts and said it might be able to count them. Which would be ridiculously easy to do, since it's just a correctly formatted spreadsheet with each row being one user.
So he says OK, count them for me. The AI apologizes: it can't process the file because it's too big to be passed fully as a parameter to a Python script (OK, why and how are you doing that?), but says it might be able to process the list if it's copy-pasted into a text file.
My co-worker is like, at that point, why fucking not? and does the thing. The AI still fails anyway and apologizes again.
We’re paying for that shit. Not specifically for copilot, but it was part of the package. Laughing at how it fails at simple tasks it set up for itself is slightly entertaining I guess, thanks Microsoft.
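For reference, counting the rows without any AI involved really is a few lines of code. A minimal sketch, assuming the report is exported as a CSV with a single header row (the filename users.csv and the header row are assumptions, not details from the comment):

```python
# Minimal sketch: count user rows in an exported report.
# Assumes the spreadsheet was saved as "users.csv" with one header row;
# both the filename and the header are assumptions, not details from the comment.
import csv

with open("users.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))

user_count = len(rows) - 1  # subtract the header row
print(f"{user_count} user accounts")
```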
Jesus_666@lemmy.world 16 hours ago
Oh yeah, same here except with a self-hosted LLM. I had a log file with thousands of warnings and errors coming from several components. Major refactor of a codebase in the cleanup phase. I wanted to have those sorted by severity, component, and exception (if present). Nothing fancy.
So, hoping I could get a quick solution, I passed it to the LLM. It returned an error. Turns out that a 14 megabyte text file exceeds the context size. That server with several datacenter GPUs sure looks like a great investment now.
So I just threw together a script that applied a few regexes. That worked, no surprise.
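For anyone curious what "a few regexes" can look like here, a minimal sketch in Python. The log line format, the file name refactor.log, and both patterns are assumptions for illustration, not the commenter's actual setup:

```python
# Minimal sketch: group log lines by severity, component, and exception.
# Assumes lines look roughly like "WARN [Auth] token refresh failed: TimeoutException";
# the filename and both regexes are assumptions, not the commenter's actual format.
import re
from collections import defaultdict

LINE_RE = re.compile(
    r"(?P<severity>ERROR|WARN|WARNING)\s+"
    r"\[(?P<component>[^\]]+)\]\s+"
    r"(?P<message>.*)"
)
EXC_RE = re.compile(r"\b(\w+(?:Exception|Error))\b")

groups = defaultdict(list)
with open("refactor.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        m = LINE_RE.search(line)
        if not m:
            continue
        exc = EXC_RE.search(m.group("message"))
        key = (m.group("severity"), m.group("component"), exc.group(1) if exc else "-")
        groups[key].append(line.rstrip())

# Report sorted by severity, then component, then exception name.
for (severity, component, exc), lines in sorted(groups.items()):
    print(f"{severity} | {component} | {exc}: {len(lines)} lines")
```

Grouping on a (severity, component, exception) key and sorting the keys covers everything the comment describes, and a 14 MB file fits comfortably in memory.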