A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data
Submitted 11 months ago by Fallstar@mander.xyz to technology@lemmy.world
Comments
yuki2501@lemmy.world 11 months ago
The scientific community needs to gather and reach a consensus that AI is banned from writing papers.
MuskyMelon@lemmy.world 11 months ago
GIGO overcomes all
Archangel1313@lemm.ee 11 months ago
So, all those research papers were written by AI? Huh.
angrystego@lemmy.world 11 months ago
No, they were not. AI was probably used for translation.
wewbull@feddit.uk 11 months ago
Translating is the process of rewriting the paper in another language. The paper has been written (in English) by an LLM.
crystalmerchant@lemmy.world 11 months ago
The phrase is “vegetative electron microscopy”
catloaf@lemm.ee 11 months ago
And it looks more like a machine translation error than anything else. Per the article, there was a dataset with two instances of the phrase being created from bad OCR. Then, more recently, somehow the bad phrase got associated with a typo: in Farsi, the words “scanning” and “vegetative” are extremely similar. Thus, when some Iranian authors wanted to translate their paper to English, they used an LLM, and it decided that since “vegetative electron microscope” was apparently a valid term (since it was included in its training data), that’s what they meant.
It’s not that the entire papers were being invented from nothing by ChatGPT.
wewbull@feddit.uk 11 months ago
It’s not that the entire papers were being invented from nothing by ChatGPT.
Yes it is. The papers are the product of an LLM. Even if the user only thought it was translating, the translation hasn’t been reviewed and has errors. The causal link between what goes into an LLM and what comes out is not certain, so if nobody is checking the output, it could just be a technical-sounding lorem ipsum generator.
criitz@reddthat.com 11 months ago
It’s been found in many papers though. Do they all have such excuses?
TachyonTele@lemm.ee 11 months ago
Don’t use fucking AI to write scientific papers and the problem is solved. Wtf.
Cryophilia@lemmy.world 11 months ago
More salient takeaway is, don’t use a LLM to translate a scientific paper. Because it can’t translate a scientific paper. It can only rewrite the entire paper, in a different language. And it will introduce misunderstandings and hallucinations.
HailSeitan@lemmy.world 11 months ago
Let’s delve into the issue
Telorand@reddthat.com 11 months ago
The lede is buried deep in this one. Yeah, these dumb LLMs got bad training data that persists to this day, but more concerning is the fact that some scientists are relying upon LLMs to write their papers. This is literally the way scientists communicate their findings to other scientists, lawmakers, and the public, and they’re using fucking predictive text like it has cognition and knows anything.
Sure, most (all?) of those papers got retracted, but those are just the ones that got caught. How many more are lurking out there with garbage claims fabricated by a chatbot?
Thankfully, science will inevitably sus those papers out eventually, as it always does, but it’s shameful that any scientist would be so fatuous as to put out a paper written by a dumb bot. You’re the experts. Write your own goddamn papers.
Ledericas@lemm.ee 11 months ago
Oh yeah, not to mention a lot of papers tended to be low quality before AI was even used. I’ve been hearing that people are writing dozens of papers just to fluff up their resume/CV. It was quantity over quality. I was in a presentation where the guy presenting their research had written 40+ papers just to get hired at a university somewhere.
BussyCat@lemmy.world 11 months ago
They were translating them, not actually writing them. Obviously it should have been caught by reviewers, but that’s not nearly as bad.
wewbull@feddit.uk 11 months ago
Translating them…otherwise known as rewriting the whole paper.
dgriffith@aussie.zone 11 months ago
Thankfully, science will inevitably sus those papers out eventually, as it always does,
In the future, all search engines will have an option to ignore any results from 2022-20xx, the era of AI slop.
unexposedhazard@discuss.tchncs.de 11 months ago
It’s the immediate takeaway I made from the headline, so I don’t feel like it’s buried deep.
Telorand@reddthat.com 11 months ago
It’s not mentioned at all in the article, so what you inferred from the headline is not what the author conveyed.
adespoton@lemmy.ca 11 months ago
In some cases, it’s people who’ve done the research and written the paper who then use an LLM to give it a final polish. Often, it’s people who are writing in a non-native language.
Doesn’t make it good or right, but adds some context.
wewbull@feddit.uk 11 months ago
Adding extra polish, like nonsense phrases. Nobody is supervising it, then.
Telorand@reddthat.com 11 months ago
Sure, and I’m sympathetic to the baffling difficulties of English, but use Google Translate and ask someone who’s more fluent for help with the final polish (as a single suggestion). Trusting your work, trusting science, to an LLM is lunacy.
Letsdothisok@lemmy.world 11 months ago
Super interesting. But also, super boring.