Comment on ChatGPT is losing some of its hype, as traffic falls for the third month in a row
demlet@lemmy.world 1 year ago
Yikes, I would be very scared to take anything ChatGPT says as accurate. Google keeps trying to get me to use theirs when I do searches, and I refuse.
Prandom_returns@lemm.ee 1 year ago
I think (hope) that person is being facetious.
I hope people are smart enough to understand that the statistical sentence generators don’t “know” anything.
penguin@sh.itjust.works 1 year ago
It can generate simple stuff accurately quite often. You just have to keep in mind that it could be dead wrong and you have to test/verify what it says.
Sometimes I feel like a few lines of code should be doable in one line using a specific technique, so I ask it to do that and see what it does. I don’t just take what it says and use it; I look at how it tried to solve the problem and then check it, for example by looking up whether the method it used actually exists and reading that method’s documentation.
Exact same as what I would do if I saw someone on stack overflow or reddit recommending something.
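To give a concrete picture of that kind of request and check (a hypothetical example; the comment doesn’t name a language, so this assumes Python): a small loop you might ask to be collapsed into one line with a generator expression, plus a quick test that the suggested one-liner actually matches the original behaviour.

```python
# Hypothetical illustration of the workflow described above (not from the
# original comment): a multi-line loop, the one-line version you might ask
# a chatbot for, and a check that the two agree.

def total_even_squares_loop(numbers):
    """Original multi-line version: sum the squares of the even numbers."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

def total_even_squares_oneliner(numbers):
    """One-line version using a generator expression (the 'specific technique')."""
    return sum(n * n for n in numbers if n % 2 == 0)

# Verify the suggestion instead of trusting it blindly.
sample = [1, 2, 3, 4, 5, 6]
assert total_even_squares_loop(sample) == total_even_squares_oneliner(sample)
print(total_even_squares_oneliner(sample))  # 56
```

If the suggestion had used a method or function you didn’t recognize, the next step would be confirming it exists in the official docs before using it.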
ribboo@lemm.ee 1 year ago
It’s just very quick at doing simple things you could already do, or things you’d need to think about for a couple of minutes.
I wouldn’t trust it to do things I couldn’t achieve. But for stuff I could, it’s often much quicker. And I’m well equipped to check what it’s doing myself.
Prandom_returns@lemm.ee 1 year ago
A parrot can produce sentences that sound perfectly correct and fluent, but it’s just repeating sounds its owner taught it.
Garbage in, garbage out.
Even the smartest, most educated people are never 100% sure of anything, because there are always nuances.
These engines are fed information that is written with 100% surety, completely devoid of nuance. They will not produce “answers to questions” that are correct, because “correct” is fluid.
ribboo@lemm.ee 1 year ago
Meh.
That’s a very fallibilistic viewpoint. There are lots of questions that can be answered correctly and with certainty.
demlet@lemmy.world 1 year ago
You may be right, now that I reread their comment.