I run my course exams in biochemistry through AI chat sites, and these sites are curiously doing worse than two years ago. I think there is an active campaign by activists to feed AI misinformation. But the biggest problem for STEM applications is that when a new discovery changes a paradigm, AI still quotes the older, outdated paradigm because of the sheer mass of that text on the web.
Comment on Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
FriendBesto@lemmy.ml 10 hours ago
Yeah, I have some background in History, and ChatGPT will be objectively wrong about some things. Then I will tell it that it is wrong because X, Y and Z, and the stupid thing will come back with, "Yes, you are right, X, Y and Z were a thing because…".
If I didn’t know that it was wrong, or if say, a student took what it said at face value, then they too would now be wrong. Literal misinformation.
Not to mention the other times it is wrong, and not just ChatGPT, because it will source things like Reddit. Recently, Brave's AI made the claim that Ironfox, the Firefox fork, was based on FF ESR. That is impossible, since Ironfox is a fork for Android. So why was it wrong? It quoted some random guy who said that on Reddit.
SaveTheTuaHawk@lemmy.ca 4 hours ago
ganryuu@lemmy.ca 6 hours ago
I get the feeling that you’re missing one very important point about GenAI: it does not, and cannot (by design) know right from wrong. The only thing it knows is what word is statistically the most likely to appear after the previous one.
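That "most likely next word" idea can be shown with a toy example. This is a deliberately tiny bigram model, not how any real LLM is implemented (those use neural networks over far more context), but it illustrates the point: the model only tracks which word most often follows another, with no concept of whether the continuation is true. The corpus and words here are made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on a huge slice of the web.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    # Return the statistically most common continuation.
    # There is no notion of "right" or "wrong" here, only frequency.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" — it follows "the" most often in this corpus
```

If the corpus contained a falsehood repeated often enough, the model would happily emit it, which is exactly the failure mode described above.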