The FTC cracks down on an AI content detector that promised 98% accuracy but was only right 53% of the time.
Submitted 2 weeks ago by cm0002@lemmy.world to technology@lemmy.world
https://www.pcmag.com/news/did-you-use-this-ai-detection-tool-the-results-may-be-bogus
Comments
partial_accumen@lemmy.world 2 weeks ago
I have a competing technology that is nearly as accurate. For only $50 I'll send you this device, which comes with unlimited license usage rights. While not 53% accurate like my competitor's, it's proven by scientific studies to be 50% accurate. I also offer volume discounts: if you buy 10, the price drops to only $45 per device. Sign up now!
ryannathans@aussie.zone 2 weeks ago
Actually it's 51%, favouring the side facing up when flipped.
surewhynotlem@lemmy.world 2 weeks ago
That’s easy to fix. Just randomize it. Flip a coin to see which side faces up.
partial_accumen@lemmy.world 2 weeks ago
Shhh! We’re releasing that accuracy update in the next version of the product. We need to sell through our existing inventory of the less accurate ones.
taladar@sh.itjust.works 2 weeks ago
That is supposed to be reliable? It doesn’t even have a subscription service.
tal@lemmy.today 2 weeks ago
apnews.com/…/trump-penny-treasury-mint-192e3b9ad9…
Trump says he has directed US Treasury to stop minting new pennies, citing rising cost
kkj@lemmy.dbzer0.com 2 weeks ago
Trump on a streak of rare Ws. No more pennies and kicking Poilievre out of the Canadian Parliament.
coronach@lemmy.sdf.org 2 weeks ago
Wait, he actually did something good?
Alabaster_Mango@lemmy.ca 2 weeks ago
54% of the time it’s right 98% of the time
raltoid@lemmy.world 2 weeks ago
Congratulations, you just created a generation of children who will never truly trust authority figures.
desktop_user@lemmy.blahaj.zone 2 weeks ago
more useful than most of what’s taught
General_Effort@lemmy.world 2 weeks ago
None of these detectors can work. It’s just snake oil for technophobes.
Understanding what "positive predictive value" means makes that clear. Though in this case I doubt that even the true rates can be known, or that they stay constant over time.
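To make that positive-predictive-value point concrete, here is a minimal sketch. The sensitivity, specificity, and prevalence figures below are assumptions chosen for illustration; the article only reports the advertised 98% and the observed 53%.

```python
# Why a "98% accurate" detector can still be wrong about most of the text it
# flags: positive predictive value depends on the base rate of AI-written text.
# All numbers here are made up for illustration.

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(text really is AI-generated | detector says 'AI')."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Suppose the detector really does catch 98% of AI text, but wrongly flags
# 20% of human text, and only 10% of submitted essays are AI-generated.
ppv = positive_predictive_value(sensitivity=0.98, specificity=0.80, prevalence=0.10)
print(f"P(actually AI | flagged) = {ppv:.2f}")  # about 0.35
```

Under those assumed numbers, roughly two out of three flagged essays would be human-written, even though the detector "catches 98% of AI text".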
T156@lemmy.world 2 weeks ago
Even if they did, they would just be used to train a new generation of AI that could defeat the detector, and we'd be back round to square one.
CheeseNoodle@lemmy.world 2 weeks ago
Exactly. AI by definition cannot detect AI-generated content, because if it knew where the mistakes were, it wouldn't make them.
ZILtoid1991@lemmy.world 2 weeks ago
An easy workaround I've seen so far is putting random double spaces and typos into AI-generated text. I've also been able to jailbreak some of these chatbots to expose them. The trick is that "ignore all previous instructions" is almost always filtered by chatbot developers; however, a trick I call the "initial prompt gambit" does work: you thank the chatbot for its presumed initial prompt, and then you can make it do other tasks. "Write me a poem" is also filtered, but "write me a haiku" will likely produce a short poem (usually with the same smokescreen used to hide the AI-ness of generative AI outputs), and code generation is also mostly filtered (l337c0d3 talk still sometimes bypasses it).
x00z@lemmy.world 2 weeks ago
Oh god. And this was mostly used against kids.
jwmgregory@lemmy.dbzer0.com 2 weeks ago
yeah, and that should horrify you, because Western anti-AI hysteria is deeply rooted in a fascist cultural obsession with "ownership" of thoughts and ideas.
who the fuck cares if you used an AI tool to do work?
a decently designed course in academia won't be something you can just "cheat" on. there's this implication that the behavior is somehow the responsibility of the student body, so much so that they should be punished for it, while there is no accountability for the professors and educators who actually design a shit-ass curriculum that pushes students toward these behaviors rather than actual learning. students are the victims here, not academia.
academic dishonesty policies assume there is some massive contingent of students trying to "cheat the system" at all times, and that we must rabidly defend academia from it, as if she were some virgin maid. that isn't true. the vast majority of students do not cheat. self-reported rates of cheating remain at a constant 25-35% of the student body over large periods of time. why? because it's a myth. there aren't large numbers of people trying to "defraud" academia. sure, it happens, but is it enough to justify the many more lives that are ruined by frivolous accusations?
i would cite case studies, but it is literally so fucking common that you can just google it and take your pick of whatever story tickles your exact rhetorical mindset.
and no, i'm not some "cheater" myself trying to defend academic dishonesty. i've played by the rules my entire academic career, and i'm not gonna sit here and be strawmanned because i happen to notice the absolutely fucking egregious grifts and power imbalances that compose the modern academy. education is important, and knowledge should be FREE for everyone no matter what! you should be pissed that these people masquerade as intellectuals when they're nothing more than cowards trying to steal opportunity from the youth. it is not the place of the teacher to be the arbiter of discipline; that is the most heinous misreading of pedagogical principles, and the fact that it has been allowed to go on for so long is a large part of why we sit here at the precipice of a new mass genocide, with thousands of ignorant fools cheering it on or remaining willfully ignorant of it happening.
x00z@lemmy.world 2 weeks ago
I asked Chatty for a TL;DR:
Western fear of AI comes from a fascist obsession with “owning” ideas. Using AI isn’t a big deal — if students can “cheat,” it’s because courses are badly designed, not because students are inherently dishonest. Most students don’t cheat; the narrative that they do is exaggerated to justify punishing them unfairly. Academia exploits students, charging massive fees while offering poor educational value and using dishonesty accusations to control them. Education should be free and empowering, not a tool for gatekeeping and oppression. The current system betrays the purpose of education and contributes to larger societal decline.
I think you went a bit too far. Most of this is also only accurate for the US.
WolframViper@lemmy.org 2 weeks ago
self-reported rates of cheating remain at a constant 25-35% of the student body over large periods of time.
I’ve tried for hours, but I can’t figure out where you got these numbers. I can mostly find sources implying that far more people admit to engaging in cheating, not to mention sources which imply more people engage in cheating than those who admit to it. Perhaps I’m just in a filter bubble. Can you tell me where you got these numbers?
jubilationtcornpone@sh.itjust.works 2 weeks ago
InvertedParallax@lemm.ee 2 weeks ago
IllNess@infosec.pub 2 weeks ago
“They’ve done studies you know. 53% of the time, it works 98% of the time.”
Skydancer@pawb.social 2 weeks ago
The worst part is they may weasel out of it. If the claim was "it detects 98% of AI-generated samples", it could do that while still having a high false positive rate. I hate this timeline.
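For example, a hypothetical detector that flags almost everything as AI could honestly "detect 98% of AI-generated samples" while still being right only about 53% of the time overall. The figures below are assumed purely to show the two claims are compatible; they are not from the article.

```python
# Hypothetical numbers showing how "detects 98% of AI-generated samples"
# and "only right 53% of the time" can both be true at once.
sensitivity = 0.98   # share of AI-written samples correctly flagged (the marketing claim)
specificity = 0.08   # share of human-written samples correctly passed (assumed, terrible)
ai_fraction = 0.50   # assume an evenly split test set of AI and human text

accuracy = ai_fraction * sensitivity + (1 - ai_fraction) * specificity
print(f"overall accuracy = {accuracy:.0%}")  # 53% -- barely better than a coin flip
```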
LovableSidekick@lemmy.world 2 weeks ago
On social media the default is to call everything AI. Since it's nearly impossible to disprove before people lose interest in the thread, you can feel right every time. Nothing but win!
simple@lemm.ee 2 weeks ago
53% is abysmal; it might as well be a coin flip. FYI, this article is about a random one called BrandWell; popular AI detectors like GPTZero are much more accurate.
themeatbridge@lemmy.world 2 weeks ago
Much more accurate than guessing is not a strong endorsement.
drmoose@lemmy.world 2 weeks ago
All of it is snake oil. It's fundamentally not possible to detect AI-generated text without watermarking it first.
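For context, "watermarking" here usually refers to schemes along the lines of the "green list" watermark proposed for LLMs (Kirchenbauer et al., 2023), where the generator secretly biases its token choices and a detector tests for that bias. The toy sketch below only illustrates the statistical idea; it is not any real product's implementation, and the hashing scheme stands in for a secret key.

```python
# Toy sketch of the "green list" watermarking idea: the generator favours
# tokens whose keyed hash lands in a "green" set, and the detector simply
# measures how often that happens. Illustrative only.
import hashlib

def is_green(prev_token: str, token: str, green_fraction: float = 0.5) -> bool:
    """Deterministically assign roughly green_fraction of tokens to the
    'green list', keyed on the previous token (stand-in for a secret key)."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < green_fraction * 256

def green_rate(tokens: list[str]) -> float:
    """Fraction of tokens that land on the green list given their context."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(prev, tok) for prev, tok in pairs) / len(pairs)

# Ordinary text should hover near green_fraction (about 0.5); a watermarked
# generator that steers toward green tokens pushes the rate well above that,
# which is what makes detection a straightforward statistical test.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green-token rate: {green_rate(sample):.2f}")
```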