Absolutely. If they don’t care to actually read the texts, they have to accept the risks of not reading them.
Comment on Scientists reportedly hiding AI text prompts in academic papers to receive positive peer reviews
Mondez@lemdro.id 3 days ago
I don’t see this as rotten behaviour at all; I see it as a Bobby Tables moment, teaching an organisation that relies on a technology that it had better have its ducks in a row.
Treczoks@lemmy.world 3 days ago
Lemminary@lemmy.world 3 days ago
¿Por qué no los dos? [Why not both?]
Ilovethebomb@sh.itjust.works 3 days ago
It’s an XKCD comic.
XeroxCool@lemmy.world 3 days ago
They didn’t ask which comic it was; they asked “but why not both?” It can be both unethical and a lesson.
meme_historian@lemmy.dbzer0.com 3 days ago
It’s still extremely shitty, unethical behavior in my book, since the negative impact is not felt by the organization that’s failing to validate its inputs, but by your peers, who are potentially being screwed out of a fair review process and a spot in a journal or conference.
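(For readers unfamiliar with the “Bobby Tables” reference above: it is the xkcd strip about SQL injection, i.e. failing to validate untrusted input. A minimal sketch of the classic fix, parameterized queries, using Python’s standard `sqlite3` module; the table and payload here are hypothetical examples, not from the thread.)

```python
import sqlite3

# In-memory database with a hypothetical "students" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

# The classic xkcd payload: input that tries to smuggle in SQL.
malicious = "Robert'); DROP TABLE students;--"

# Unsafe pattern (commented out): splicing input into the query string
# lets the input change the SQL itself in drivers that allow it.
# conn.execute(f"INSERT INTO students (name) VALUES ('{malicious}')")

# Safe pattern: the driver treats the bound value purely as data.
conn.execute("INSERT INTO students (name) VALUES (?)", (malicious,))

rows = conn.execute("SELECT name FROM students").fetchall()
print(rows)  # the payload is stored verbatim as a name; the table survives
```

The hidden-prompt trick in the article is the same category of failure: text that was supposed to be data (the paper) is interpreted as instructions by the consumer (the LLM reviewer).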