‘ChatGPT detector’ catches AI-generated papers with unprecedented accuracy
Submitted 1 year ago by floofloof@lemmy.ca to technology@lemmy.world
https://www.nature.com/articles/d41586-023-03479-4
Comments
demonsword@lemmy.world 1 year ago
no references whatsoever to false positive rates, which I’d assume are quite high
downhomechunk@midwest.social 1 year ago
If you call heads 100% of the time, you’ll be 100% accurate on predicting heads in a coin toss.
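The coin-toss point in a few lines of Python (toy counts, purely illustrative): a detector that flags everything as AI scores perfectly on the AI-written samples while misclassifying every human-written one.

```python
# Toy illustration: an "always AI" detector looks perfectly accurate if you
# only evaluate it on AI-written samples. The counts here are made up.
ai_samples = ["ai"] * 90        # texts that really are AI-written
human_samples = ["human"] * 10  # texts written by people

def detector(text):
    return True  # always predicts "AI-generated"

true_positives = sum(detector(t) for t in ai_samples)
false_positives = sum(detector(t) for t in human_samples)

recall = true_positives / len(ai_samples)                   # 1.0 -- looks great
false_positive_rate = false_positives / len(human_samples)  # 1.0 -- useless
print(recall, false_positive_rate)
```

This is why a headline accuracy number means little without the false positive rate alongside it.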
EvilBit@lemmy.world 1 year ago
As I understand it, one of the ways AI models are commonly trained is basically to run them against a detector and train against it until they can reliably defeat it. Even if this was a great detector, all it’ll really serve to do is teach the next model to beat it.
magic_lobster_party@kbin.social 1 year ago
That’s how GANs are trained, and I haven’t seen anything about GPT4 (or DALL-E) being trained this way. It seems like current generative AI research is moving away from GANs.
KingRandomGuy@lemmy.world 1 year ago
Also one very important aspect of this is that it must be possible to backpropagate the discriminator. If you just have access to inference on a detector of some kind but not the model weights and architecture itself, you won’t be able to perform backpropagation and therefore can’t generate gradients to update your generator’s weights.
That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.
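The backpropagation point can be sketched with a toy 1-D "detector" (a fixed logistic scorer; all numbers and names here are invented): the generator's update needs the detector's internal weights to form a gradient via the chain rule, which black-box inference alone doesn't give you.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy white-box "detector": a logistic scorer with known weights.
w, b = 2.0, -1.0
def detector(x):
    return sigmoid(w * x + b)  # probability the sample is "real"

# Toy generator: g(z) = theta * z, with a single trainable parameter.
theta, z, lr = 0.1, 1.0, 0.5

for _ in range(200):
    x = theta * z
    p = detector(x)
    # Generator wants the detector to output 1 ("real").
    # Loss = -log(p); the chain rule gives dLoss/dtheta = -(1 - p) * w * z,
    # which requires knowing w -- i.e. white-box access to the detector.
    grad_theta = -(1 - p) * w * z
    theta -= lr * grad_theta

print(detector(theta * z))  # near 1: the generator has learned to fool the detector
```

With only black-box query access you'd have to estimate gradients (or train a surrogate detector), which is far less efficient.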
EvilBit@lemmy.world 1 year ago
I know it’s intrinsic to GANs but I think I had read that this was a flaw in the entire “detector” approach to LLMs as well. I can’t remember the source unfortunately.
CthulhuOnIce@sh.itjust.works 1 year ago
I really, really doubt this. OpenAI said recently that AI detectors are pretty much impossible. And in the article they literally use the wrong name to refer to a different AI detector.
Especially when you can change ChatGPT’s style just by asking it to write in a more casual way, “stylometrics” seems an improbable method for detecting AI as well.
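“Stylometrics” here means classifying by surface writing statistics. A minimal sketch (the features and sample texts are invented for illustration) shows why a simple style-change prompt undermines it: the measured features just move.

```python
import re

# Two toy stylometric features; real stylometry uses many more,
# but the same fragility applies.
def stylometric_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

formal = ("The results demonstrate a statistically significant improvement "
          "over the established baseline. Furthermore, the proposed "
          "methodology generalizes readily to several adjacent domains.")
casual = "Yeah so it worked way better. Honestly it works on other stuff too."

f, c = stylometric_features(formal), stylometric_features(casual)
print(f, c)  # the "casual" rewrite shifts exactly the statistics a stylometric detector keys on
```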
Fredthefishlord@lemmy.blahaj.zone 1 year ago
It’s in OpenAI’s best interest to say they’re impossible. Regardless of whether that’s actually true, they’re the least trustworthy possible source to rely on when forming your understanding of this.
CthulhuOnIce@sh.itjust.works 1 year ago
openai had their own ai detector so I don’t really think it’s in their best interest to say that their product being effective is impossible
simple@lemm.ee 1 year ago
Willing to bet it also catches non-AI text and calls it AI-generated constantly
snooggums@kbin.social 1 year ago
The best part is that if AI does a good job of summarizing, then anyone who is good at summarizing will look like AI. If AI news articles read like a human wrote them, then a human-written news article will look like AI.
floofloof@lemmy.ca 1 year ago
The original paper does have some figures about misclassified paragraphs of human-written text, which would seem to mean false positives. The numbers are higher than for AI-written text.
TropicalDingdong@lemmy.world 1 year ago
This is kind-of silly.
We will 100% be using AI to generate papers now and in the future. If the AI can catch any wrong conclusions or misleading interpretations, that would be helpful.
Not using AI to help you write at this point is you wasting valuable time.
Laticauda@lemmy.ca 1 year ago
Not using AI to help you write at this point is you wasting valuable time.
Bro WHAT are you smoking. In academia the process of writing the paper is just as important as the paper itself, and in creative writing why would you even bother being a writer if you just had an ai do it for you? Wasting valuable time? The act of writing it is inherently valuable.
TropicalDingdong@lemmy.world 1 year ago
Bro gotta get that shit out. Not doing the work to jerk off to the process.
Deckweiss@lemmy.world 1 year ago
I don’t understand. Are there places where using chatGPT for papers is illegal?
The state where I live explicitly allows it. Only plagiarism is prohibited. But making chatGPT formulate the result of your scientific work, or correct the grammar or improve the style, etc. doesn’t bother anybody.
alienanimals@lemmy.world 1 year ago
It’s not a big deal. People are just upset that kids have more tools/resources than they did. They would prefer kids wrote on paper with pencil and did not use calculators or any other tool that they would have available to them in the workforce.
BraveLittleToaster@lemm.ee 1 year ago
Teachers when I was little “You won’t always have a calculator with you” and here I am with a device more powerful than what sent astronauts to the moon in my pocket 24/7
Phanatik@kbin.social 1 year ago
There's a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you. The latter constitutes plagiarism, which schools/universities are strongly against.
The problem is being able to differentiate between a paper that's been written by a human (which may or may not have been written with ChatGPT's assistance) and a paper entirely written by ChatGPT and presented as a student's own work.
I want to strongly stress that the latter situation is plagiarism. The argument doesn't even involve the plagiarism that ChatGPT itself commits. The definition of plagiarism is simple: ChatGPT wrote a paper, you the student did not, and you are presenting ChatGPT's paper as your own; ergo, plagiarism.
kirklennon@kbin.social 1 year ago
Why should someone bother to read something if you couldn’t be bothered to write it in the first place? And how can they judge the quality of your writing if it’s not your writing?
Deckweiss@lemmy.world 1 year ago
Science isn’t about writing. It is about finding new data through scientific process and communicating it to other humans.
If a tool helps you do any of it better, faster or more efficiently, that tool should be used.
But I agree with your sentiment when it comes to for example creative writing.
agent_flounder@lemmy.world 1 year ago
To me this question hints at the seismic paradigm shift that comes from generative AI.
I struggle to wrap my head around it and part of me just wants to give up on everything. But… We now have to wrestle with questions like:
What is art and do humans matter in the process of creating it? Whether novels, graphic arts, plays, whatever else?
What is the purpose of writing?
What if anything is still missing from generative writing versus human writing?
Is the difference between human intelligence and generative AI just a question of scale and complexity?
Now or in the future, can human experience be simulated by a generative AI via training on works produced by humans with human experience?
If an AI can now or some day create a novel that is meaningful or moving to readers, with all the hallmarks of a literary masterwork, is it still of value? Does it matter who/what wrote it?
Can an AI have novel ideas and insights? Is it a question of technology? If so, what is so special about humans?
Do humans need to think if AI one day can do it for us and even do it better than we can?
Is there any point in existing if we aren’t needed to create, think, generate ideas and insights? If our intellect is superfluous?
If human relationships conducted in text and video can be simulated on one end by a sufficiently complex AI, to fool the human, is it really a friendship?
Are we all just essentially biological machines and our bonds simply functions of electrochemical interactions, instincts, and brain patterns?
I’m late to the game on all this stuff. I’m sure many have wrestled with a lot of this. But I also think maybe generative AI will force far more of us to confront some of these things.
TropicalDingdong@lemmy.world 1 year ago
If you use ChatGPT you should still read over the result, because it can say something wrong about your results, and run a plagiarism tool on it because it could plagiarize unintentionally. So what's the big deal?
There isnt one. Not that I can see.
Jesusaurus@lemmy.world 1 year ago
At least within a higher level education environment, the problem is who does the critical thinking. If you just offload a complex question to chat gpt and submit the result, you don’t learn anything. One of the purposes of paper-based exercises is to get students thinking about topics and understanding concepts to apply them to other areas.
gullible@kbin.social 1 year ago
I don’t think people are arguing against minor corrections, just wholesale plagiarism via AI. The big deal is wholesale plagiarism via AI. Your argument is as reasonable as it is adjacent to the issue, which is to say completely.
Something_Complex@lemmy.world 1 year ago
I’m gonna need something more than that to believe it
macarthur_park@lemmy.world 1 year ago
The article is reporting on a published journal article. Surely that’s a good start?
KingRandomGuy@lemmy.world 1 year ago
I haven’t read the article myself, but it’s worth noting that in CS as a whole and especially ML/CV/NLP, selective conferences are generally seen as the gold standard for publications compared to journals. The top conferences include NeurIPS, ICLR, ICML, CVPR for CV and EMNLP for NLP.
It looks like the journal in question is a physical sciences journal as well, though I haven’t looked much into it.
LunchEnjoyer@lemmy.world 1 year ago
Didn’t OpenAI themselves state some time ago that it isn’t possible to detect it?
TheLurker@lemmy.world 1 year ago
Well with VC investment low due to higher interest rates it was only a matter of time before academic people started posting bullshit papers to lure that sweet sweet VC money.
Seems like a few people at the University of Kansas in Lawrence are making a run at a start up.
cyborganism@lemmy.ca 1 year ago
I say we develop a Voight-Kampff test as soon as possible for detecting if we’re speaking to a real person or an actual human being when chatting or calling a customer representative of a company.
agent_flounder@lemmy.world 1 year ago
if we’re speaking to a real person or an actual human being
Ummm …
cyborganism@lemmy.ca 1 year ago
Hahahaha OMG. I fixed it. Thanks!
nfsu2@feddit.cl 1 year ago
Isn’t this like a constant fight between the people who develop anti-AI-content tools and the internet pirates who develop anti-anti-AI-content? Pretty sure the pirates always win.
Overzeetop@kbin.social 1 year ago
You sully the good name of Internet Pirates, sir or madam. I'll have you know that online pirates have a code of conduct and there is no value in promulgating an anti-ai or anti-anti-ai stance within the community which merely wishes information to be free (as in beer) and readily accessible in all forms and all places.
You are correct that the pirates will always win, but they(we) have no beef with ai as a content generation source. ;-)
nfsu2@feddit.cl 1 year ago
Oh yes, by fight I mean that no matter how hard developers push proprietary software, it gets cracked anyway. It’s so funny.
Ele7en7@lemmy.world 1 year ago
Reminds me of the trace buster buster buster
Satish@fedia.io 1 year ago
they still can't capture data written from Ai over websites like ' https://themixnews.com/' https://themixnews.com/cj-amos-height-age-brother/
rikonium@discuss.tchncs.de 1 year ago
Isn’t current precedent 0% accuracy already?
OKRainbowKid@lemmy.sdf.org 1 year ago
If you have 0% accuracy in a binary decision, you could just always choose the other option and be right 100% of the time.
rikonium@discuss.tchncs.de 1 year ago
Ahh, so the true rate would actually be 50% if it was no better than random chance?
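Right, and that exchange can be sketched in a few lines (random labels, purely illustrative): a binary classifier that's wrong 100% of the time carries as much information as one that's always right, since you can just flip its output; 50% is the truly uninformative case.

```python
import random

random.seed(0)
truth = [random.choice([True, False]) for _ in range(1000)]

# A "0% accurate" binary classifier is wrong on every sample...
always_wrong = [not t for t in truth]
# ...so inverting its output yields a perfect classifier.
inverted = [not p for p in always_wrong]

acc_wrong = sum(p == t for p, t in zip(always_wrong, truth)) / len(truth)
acc_flipped = sum(p == t for p, t in zip(inverted, truth)) / len(truth)
print(acc_wrong, acc_flipped)  # 0.0 and 1.0; chance level would be ~0.5
```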
joystick@lemmy.world 1 year ago
Doubt
givesomefucks@lemmy.world 1 year ago
Why?
ChatGPT writes them all the same. So it’s not so much “an AI wrote this” as it is “Bob always writes like this, we know Bob wrote this because _____”
It’s a bad headline, but the article immediately clarified.
fmstrat@lemmy.nowsci.com 1 year ago
Believable because:
So outside of its purview? Agree.