This hasn’t been my experience. Yes, ChatGPT gets stuff wrong, and fairly regularly. But I can ask it my question directly, include sample code, and get an answer immediately. Anyone going on Stack Overflow has to either google around and sift through answers for relevance, or post the question and wait for someone to respond.
With either ChatGPT or Stack Overflow you have to check the answer to make sure it works - that’s how coding goes. But with ChatGPT I know whether it works pretty much immediately, with a fairly low investment of time and effort. And if it doesn’t, I just rephrase the question, or literally say “that doesn’t seem to work, now I’m getting this error: $error”
GenderNeutralBro@lemmy.sdf.org 1 year ago
If you need a correct answer, you’re doing it wrong!
I’m joking of course, but there’s a seed of truth: I’ve found ChatGPT’s wrong or incomplete answers to be incredibly helpful as a starting point. Sometimes it will suggest a Python module I didn’t even know about that does half my work for me. Or sometimes it has a lot of nonsense but the one line I actually need is correct (or close enough for me to understand).
Nobody should be copying code off Stack Overflow without understanding it, either.
sj_zero 1 year ago
I won't pretend otherwise: it can be very useful, as long as you know what it is and accept that very often it will make stuff up out of whole cloth.
I was trying to figure out how to make a change to a Rust program that I use (lotide, which I'm posting from, actually), but I don't know anything about Rust. It ended up leading me down a rabbit hole with a library that just didn't exist, and all kinds of routines that didn't exist, but ultimately I did get there. Ended up using regex instead.
GenderNeutralBro@lemmy.sdf.org 1 year ago
LOL damn. I haven’t had that experience myself, but that’s probably because it has more training data on Python than Rust.
I think the future of AI will be A) more specialized training, and B) more “dumb” components to keep it on track.