Smells like bullshit. The graphs they showed in the source paper with their accuracy at like 100% for every year seem even more like bullshit. Did they run the model over the training data or what?
Maybe I’m wrong, but text is just too noisy a medium to reliably tell whether it was written by an AI. The false positive rate would be high enough that the detector is effectively useless (rough numbers below). Does anyone have another perspective on this? If I’m missing some nuance here I’d love to understand more.
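To put rough numbers on the false-positive point, here’s a back-of-the-envelope sketch. All the rates are made up purely to show the base-rate effect, not taken from the paper:

```python
# Rough sketch: how a seemingly accurate detector behaves when AI-written
# text is actually rare. All rates below are assumptions for illustration.

true_positive_rate = 0.99   # assume the detector catches 99% of AI text
false_positive_rate = 0.05  # assume it flags 5% of human text as AI
ai_fraction = 0.01          # assume only 1% of submissions are AI-written

n = 10_000                  # hypothetical pool of documents
ai_docs = n * ai_fraction
human_docs = n - ai_docs

flagged_ai = ai_docs * true_positive_rate         # real AI text caught
flagged_human = human_docs * false_positive_rate  # humans wrongly flagged

precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Correctly flagged AI docs: {flagged_ai:.0f}")
print(f"Wrongly flagged human docs: {flagged_human:.0f}")
print(f"Chance a flagged doc is actually AI: {precision:.1%}")  # ~16.7% with these numbers
```

With those assumptions, most of the documents the detector flags are human-written, which is what I mean by effectively useless.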
Doombot1@lemmy.one 1 year ago
Interesting that the article ends with “The new ChatGPT catcher even performed well with introductions from journals it wasn’t trained on”. Isn’t that the whole point? If you only evaluate a model on the data it was trained on, you get a biased picture of its performance. I can’t remember the exact word for it, but it’s essentially over-relying on your own dataset, so of course it gets near-100% accuracy on what it was trained with. I’d be curious to see the accuracy on other papers.
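As a toy illustration of why training-set accuracy tells you very little: here’s a minimal sketch with made-up data (nothing to do with the paper’s actual setup), where an unconstrained model memorizes its training set and looks near-perfect there while doing noticeably worse on held-out data:

```python
# Toy example: training accuracy vs. held-out accuracy.
# Hypothetical synthetic data, purely to illustrate the evaluation issue.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy synthetic classification problem (flip_y adds label noise)
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# An unconstrained decision tree will memorize the training set
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # ~1.00, looks amazing
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower on unseen data
```

That gap between the two numbers is exactly why you want the accuracy reported on journals/papers the model never saw.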
MagosInformaticus@sopuli.xyz 1 year ago
“Overfitting” is the usual term.
Doombot1@lemmy.one 11 months ago
There we go, thanks for the addition! I did a lot of ML/DL stuff about 2 years ago but just couldn’t remember the term.