ChatGPT is banned by my employer, because they don’t want trade secrets being leaked, which IMO is fair enough. We work on ML stuff anyway.
Anyway, we have a junior engineer who has been caught using ChatGPT several times, whether it's IT flagging its use, a tab open in their browser during a demo, or code in my reviews that they obviously didn't write.
I recently tried to help them out on a project that uses React, and it is clear as day that this engineer cannot write code without ChatGPT. The library use is all over the place, they'll just "invent" certain APIs, or they'll use things that are deprecated or plainly don't work, which you'd catch if you'd spent any time actually thinking about the problem. IMO, reliance on ChatGPT is much worse than how juniors used to be reliant on Stack Overflow to find answers to copy paste.
blackbirdbiryani@lemmy.world 1 year ago
For the love of God, if you're a junior programmer you're overestimating your understanding if you keep relying on ChatGPT thinking "of course I'll spot the errors". You will, until one day you won't, and you end up dropping the company database or deleting everything in root.
All ChatGPT is doing is guessing the next word. And it’s trained on a bunch of bullshit coding blogs that litter the internet, half of which are now chatGPT written (without any validation of course).
If you can't take 10-30 minutes to search for, read, and comprehend information on Stack Overflow or the docs, then programming (or problem solving) just isn't for you. The junior end of this field is really getting clogged with people who want to get rich quick without doing any of the legwork behind learning how to be good at this job, and ChatGPT is really exacerbating the problem.
chicken@lemmy.dbzer0.com 1 year ago
A lot of the time this is just looking for syntax though; you know what you want to do, and it’s simple, but it is gated behind busywork. This is to me the most useful part about ChatGPT, it knows all the syntax and will write it out for you and answer clarifying questions so you can remain in a mental state of thinking about the actual problem instead of digging through piles of junk for a bit of information.
CoopaLoopa@lemmy.dbzer0.com 1 year ago
Somehow you hit an unpopular opinion landmine with the greybeard devs.
For the greybeard devs: Try asking ChatGPT to write you some Arduino code to do a specific task. Even if you don’t know how to write code for an Arduino, ChatGPT will get you 95% of the way there with the proper libraries and syntax.
No way in hell I'm digging through forums and code repos to blink an LED and send out a notification through a webhook when a sensor gets triggered. AI obviously can't do everything for you if you've never coded anything before, but it can do a damn good job of translating your knowledge of one programming language into every other programming language available.
wizardbeard@lemmy.dbzer0.com 1 year ago
The more you grow in experience, the more you're going to realize that syntax and organization are the majority of programming work.
When you first start out, it feels like the hardest part is figuring out how to get from a to b on a conceptual level. Eventually that will become far easier.
You break the big problem down into discrete steps, then figure out the best way to do each step. It takes little skill to say "the computer just needs to do this". The trick is knowing how to speak to the computer in a way that makes sense to the computer, to you, and to the others who will eventually have to work with your code.
You're doing the equivalent of a painter saying "I've done the hard part of envisioning it in my head! I'm just going to pay some guy on Fiverr to move the brush for me."
el_bhm@lemm.ee 1 year ago
Just a few days ago I read an article on the newest features of Kotlin 1.9. None of it was true.
The internet is littered with stuff like this.
If the model is correct, you are correct. If the model is not correct, you are working from false assumptions.
bear@slrpnk.net 1 year ago
Never ask ChatGPT to write code that you plan to actually use, and never take it as a source of truth. I use it to put me on a possible right path when I'm totally lost and lack the vocabulary to accurately describe what I need. Sometimes I'll ask it for an example of how something works so that I can learn it myself. It's an incredibly useful tool, but you're out of your damn mind if you're just regularly copying code it spits out. If you don't know the syntax well enough to write it yourself, how the hell do you plan to error check it?
blackbirdbiryani@lemmy.world 1 year ago
I write a lot of bash and I still have to check syntax every day, but the answer to that is not ChatGPT but a proper linter like ShellCheck that you can trust, because it's based on a rigid set of rules, not the black box of an LLM.
I can understand the syntax justification for obscure languages that don't have a well-written linter, but if anything that gives me less confidence about ChatGPT, because its training material for an obscure language is likely smaller.
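To illustrate the kind of rigid rule a linter applies: ShellCheck's SC2086 deterministically flags unquoted variable expansions, a classic bash footgun. A minimal sketch (the filename here is a made-up example):

```shell
#!/bin/sh
# Unquoted expansion splits on whitespace; ShellCheck flags it as SC2086.
# "my file.txt" is a hypothetical filename chosen to trigger the bug.
file="my file.txt"
touch "$file"

# rm $file      # buggy: expands to `rm my file.txt` (two arguments)
rm "$file"      # correct: quoting keeps the name as one argument

echo "removed: $file"
```

Running `shellcheck` over the buggy variant reports SC2086 ("Double quote to prevent globbing and word splitting") every single time, with no guessing involved.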
state_electrician@discuss.tchncs.de 1 year ago
ChatGPT cannot explain, because it doesn’t understand. It will simply string together a likely sequence of characters. I’ve tried to use it multiple times for programming tasks and found each time that it doesn’t save much time, compared to an IDE. ChatGPT regularly makes up methods or entire libraries. I do like it for creating longer texts that I then manually polish, but any LLM is awful for factual information.
pkill@programming.dev 1 year ago
More like you'll end up wasting a significant amount of time debugging not only the problem but also ChatGPT, trying to correct the bullshit it spews out while it ignores parts of your prompt.
apinanaivot@sopuli.xyz 1 year ago
You are saying that as if it’s a small feat. Accurately guessing the next word requires understanding of what the words and sentences mean in a specific context.
blackbirdbiryani@lemmy.world 1 year ago
Don't get me wrong, it's incredible. But it's still a variation on the Chinese room thought experiment: it's not real intelligence, just really good at pretending to be one. I might trust it more if there were variants trained on strictly controlled datasets.
worldsayshi@lemmy.world 1 year ago
Yup. Accurately guessing the next thought (or action) is all brains need to do so I don’t see what the alleged “magic” is supposed to solve.
Hazzia@discuss.tchncs.de 1 year ago
The best thing that’s come out of this ChatGPT bullshit is making me feel like I’m actually good at my job. To be clear, I’m not - but at the very least I can reverse engineer functional code and logically map out what I think is supposed to be happening. The bare minimum that should be required, and yet here we are, with me being able to lord my wizardry over the ChatGPT peasantry.
LemmyIsFantastic@lemmy.world 1 year ago
It’s okay old man. There is a middle there where folks understand the task but aren’t familiar with the implementation.