Similarly, other users told the outlet that the AI hallucinated wrong answers or miscalculated spreadsheets. AI experts, including Wharton School professor Ethan Mollick, accused Copilot of making bizarre suggestions for weekend meetings.
It seems these users never used GPT
kescusay@lemmy.world 8 months ago
Copilot isn’t actually bad, it’s just that you need to be careful with it and recognize its limitations.
Writing a bunch of REST endpoints for an API, need to implement all the typical HTTP verbs, and already have all the matching methods for reading, updating, and deleting values in a complex SQL database for each endpoint to call? Copilot can turn a ten-minute chore into a ten-second one. Very handy.
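For illustration, a minimal sketch of the kind of boilerplate being described, assuming an Express app and an existing data-access module; the route and helper names here are hypothetical:

```typescript
import express from "express";
// Hypothetical existing data-access layer where the SQL is already written.
import { getUser, updateUser, deleteUser } from "./db/users";

const app = express();
app.use(express.json());

// The repetitive part Copilot is good at: one thin endpoint per HTTP verb,
// each just delegating to the matching database method.
app.get("/users/:id", async (req, res) => {
  const user = await getUser(Number(req.params.id));
  if (!user) return res.status(404).end();
  res.json(user);
});

app.put("/users/:id", async (req, res) => {
  const updated = await updateUser(Number(req.params.id), req.body);
  res.json(updated);
});

app.delete("/users/:id", async (req, res) => {
  await deleteUser(Number(req.params.id));
  res.status(204).end();
});

app.listen(3000);
```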
Writing those complex SQL methods in the first place? Yeah… Copilot will probably make a ton of mistakes and its work will need to be triple-checked. You’ll save time just doing it yourself if you know how. (And if you don’t, you have no business calling yourself a developer.)
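A sketch of what one of those underlying methods might look like (the part the commenter says you should write and verify yourself), assuming node-postgres and an invented users table:

```typescript
import { Pool } from "pg";

const pool = new Pool();

// The kind of method meant here: non-trivial SQL where a subtle mistake
// (wrong join, missing transaction, bad parameter order) is easy for a
// generated suggestion to slip in. Table and column names are invented.
export async function updateUser(
  id: number,
  fields: { name?: string; email?: string }
) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    const result = await client.query(
      `UPDATE users
          SET name  = COALESCE($2, name),
              email = COALESCE($3, email)
        WHERE id = $1
        RETURNING id, name, email`,
      [id, fields.name ?? null, fields.email ?? null]
    );
    await client.query("COMMIT");
    return result.rows[0];
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```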
Copilot is best for easy boilerplate and repetitive code. Problems arise as soon as you ask it to get “creative.”
DumbAceDragon@sh.itjust.works 8 months ago
This is about the other copilot.
kubica@kbin.social 8 months ago
microsoft and its names... like the VS editor and the other VS editor.
tabular@lemmy.world 8 months ago
Do you mean a revision, or a different “Copilot”? If the latter, then this confusion is brought on by Microsoft, the company that named a successor to a gaming console “Xbox One”.
kescusay@lemmy.world 8 months ago
Yeah, I figured that out eventually, but also figure the same probably applies to the other Copilot. Same underlying technology.
Wish Microsoft would use different names for different implementations.
TimeSquirrel@kbin.social 8 months ago
One time I decided for shits and giggles to just keep pushing tab and see where it went. It didn't take long for it to enter a useless recursive loop, hallucinating a new iteration on each line.
kelvie@lemmy.ca 8 months ago
I mean didn’t we all do this when phones started autocompleting sentences like a decade ago? (Or however long it was, time perception is fickle)
joel_feila@lemmy.world 8 months ago
It will if employers only want AI code
lemmyvore@feddit.nl 8 months ago
Is it me or is this a weird statement for what’s supposed to be an exact science?
Imagine working in construction and using a level and you’re told “it’s not that it’s a bad level, you just gotta be careful with it”.
How much margin for error should we allow for getting our code right? Is it now acceptable if we only get 80% right?
kescusay@lemmy.world 8 months ago
It’s more like you get some kind of weird construction multitool that promises to be a level, a drill, a hammer, and a dozen other things, and it turns out to be a really good, innovative, and helpful level… and a really bad everything else.
pezhore@lemmy.ml 8 months ago
I use Copilot a bit for my work, and I treat it like copy-paste from StackOverflow: sure, that code might look right, but you’ve gotta double-check it and test it a few times before you commit and push.
birbs@lemmy.world 8 months ago
As a software developer I promise you that software development is very much not an exact science.
Programs are complex and there are so many different ways of achieving the same thing that all code has problems and gets a bit messy in places. You can test, but it’s not easy to ensure that everything works the way it should.
The best code you’re going to get will probably be in the space industry, but even that will have bugs. The best you can do is make the code robust even when bugs make things go wrong.
In many cases copilot will do just as well as a junior developer. It’s very good at repetitive tasks and filling gaps in your existing code.
friend_of_satan@lemmy.world 8 months ago
Always ask it to write tests for the code it generates
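For example, the kind of test file you might ask it for, sketched with Vitest against a hypothetical Copilot-generated slugify helper:

```typescript
import { describe, expect, it } from "vitest";
// Hypothetical Copilot-generated helper under test.
import { slugify } from "./slugify";

// Even when Copilot writes both the code and the tests, the tests are the
// part worth reading closely: they state what the code is supposed to do.
describe("slugify", () => {
  it("lowercases and hyphenates words", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("strips characters that are not allowed in a URL slug", () => {
    expect(slugify("C# & .NET tips!")).toBe("c-net-tips");
  });

  it("returns an empty string for empty input", () => {
    expect(slugify("")).toBe("");
  });
});
```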