Comment on Microsoft Copilot has been banned for use by US House staff members, at least for now
NounsAndWords@lemmy.world 7 months ago
Until we either solve the problem of LLMs providing false information or the problem of people being too lazy to fact check their work, this is probably the correct course of action.
TrickDacy@lemmy.world 7 months ago
Imo human laziness is the issue. In every thread where a lot of people chime in about AI, so many talk about how it's useless because it's wrong sometimes. It's basically like people who use Wikipedia but can't be bothered to cross-reference… except lazier. They literally expect a machine to be flawless because it seems confident or something?
Sylvartas@lemmy.world 7 months ago
I think you're missing the point. I don't like Copilot/ChatGPT for important stuff because if I have to double-check their solutions, I've barely gained any time. Especially since they're correct more often than not, which will make me complacent over time (the professors who were patient enough to actually explain why we shouldn't use Wikipedia as a primary source made the same point, which I thought made a lot of sense).
Daxtron2@startrek.website 7 months ago
You're going to need to fact-check any code you get online anyway, so why not have it hyper-specific to your current use case? If you're a good developer, review doesn't take nearly as long as manual implementation.
Sylvartas@lemmy.world 7 months ago
I very rarely grab code online because I work in videogames, and it's very hard to find good code for the things I struggle with, since all the publicly available stuff is aimed at hobbyists and is thus usually very basic/unoptimized as hell.
Limeey@lemmy.world 7 months ago
I can't imagine using any LLM for anything factual. It's useful for generating boilerplate, and that's basically it. Any time I try to get it to find errors in what I've written (either communication or code), it's worthless.
Eyck_of_denesle@lemmy.zip 7 months ago
My little brother was using GPT for homework, and he asked it the probability of an extra Sunday in a leap year (52 weeks + 2 days), and it said 3/8. One of the possible outcomes it listed was fkng Sunday, Sunday. I asked how two Sundays can come consecutively and it made up a whole bunch of bs. The answer is so simple: 2/7. The sources it listed also had the correct answer.
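[Editor's note: for anyone wanting to check the arithmetic, here is a minimal Python sketch (not from the thread) that enumerates the outcomes. The two extra days are always consecutive, giving 7 equally likely pairs, of which 2 contain a Sunday.]

# A leap year has 366 days = 52 full weeks + 2 extra days.
# The two extra days are consecutive, so there are 7 equally likely pairs.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
pairs = [(days[i], days[(i + 1) % 7]) for i in range(7)]
favourable = [p for p in pairs if "Sun" in p]  # (Sat, Sun) and (Sun, Mon)
print(f"{len(favourable)}/{len(pairs)}")  # prints 2/7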
ForgotAboutDre@lemmy.world 7 months ago
All it does is create answers that sound like they might be correct. It has no working cognition. People who ask questions like that expect a conversation about probability and days in a year. All it does is combine the two; it can't think about it.
QuaternionsRock@lemmy.world 7 months ago
Really? It spotted a missing push_back like 600 lines deep for me a few days ago. I've also had good success at getting it to spot missing semicolons that C++ compilers can't, because C++ is a stupid language.
BrikoX@lemmy.zip 7 months ago
You can thank all open source developers for that by supporting them.
QuaternionsRock@lemmy.world 7 months ago
Huh?
AeroLemming@lemm.ee 7 months ago
ForgotAboutDre@lemmy.world 7 months ago
It's probably just the novelty wearing off. People expected very little from it initially, then it got hyped up, which raised expectations. Combine those raised expectations with the memory of it exceeding them, and you start to see all the flaws.
Wizard_Pope@lemmy.world 7 months ago
I find it useful for quickly reformatting smaller samples of tables and the like for my reports. It's often far simpler and quicker to just drop that in there and say what to do than to write a short Python script.
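[Editor's note: for context, the kind of throwaway script being avoided might look like this. This is a hypothetical sketch; it assumes the table sits in a file named data.csv and just realigns it into fixed-width columns.]

import csv

# Read a small table and reprint it with aligned, fixed-width columns.
with open("data.csv", newline="") as f:  # hypothetical input file
    rows = list(csv.reader(f))

# Width of each column = longest cell in that column.
widths = [max(len(cell) for cell in col) for col in zip(*rows)]
for row in rows:
    print("  ".join(cell.ljust(w) for cell, w in zip(row, widths)))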