Comment on There is no such thing as an effective "AI detector", nor will there ever be one.
Zeppo@sh.itjust.works 1 year ago
Good summary of the issues. I've been fairly disappointed with what a lot of people think AI text generators are good for: a replacement for search engines, a magic oracle that can tell you any fact, something to write legal briefs. And then there are the people who generate documents and don't even proofread or fact-check them before using them for something important… Some uses are good, like basic code generation for programming tasks, but many are just silly.
The instances where some professor, with zero clue about how AI text generation works or the issues you outline here, has told a student "My AI detector said this was generated!" have been absurd. One professor with obvious, serious misunderstandings told a student, "I asked ChatGPT if it wrote this and it said yes."
Aux@lemmy.world 1 year ago
The biggest issue with publicly available ML-based text tools is that they're American-centric. Detecting ChatGPT in the UK is simple: it produces text with American spellings. And if you live outside the English-speaking world, as most humans do, it's completely useless.
Zeppo@sh.itjust.works 1 year ago
So far, yes, but only because they've been developed in the US and therefore trained mostly on US English text. Eventually someone could build models for other languages and regions, but it's a lot of work and very expensive.
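The spelling tell mentioned above is easy to sketch. This is a deliberately crude, hypothetical heuristic, not a real detector (the word lists are illustrative and tiny, and human writers use American spellings too):

```python
import re

# Illustrative (not exhaustive) spelling pairs; an assumption for this sketch.
AMERICAN = {"color", "favorite", "organize", "center", "analyze"}
BRITISH = {"colour", "favourite", "organise", "centre", "analyse"}

def spelling_lean(text: str) -> str:
    """Return 'american', 'british', or 'unclear' based on which
    variant spellings appear more often in the text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    us = len(words & AMERICAN)
    uk = len(words & BRITISH)
    if us > uk:
        return "american"
    if uk > us:
        return "british"
    return "unclear"

print(spelling_lean("My favourite colour is red."))  # british
print(spelling_lean("My favorite color is red."))    # american
```

Even as a toy, it shows why the heuristic breaks outside English entirely: there is simply nothing for it to match.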
Hamartiogonic@sopuli.xyz 1 year ago
I think Bing did a pretty good job of coming up with name suggestions for some Sims characters. Playing with a virtual dollhouse is at the more harmless end of the spectrum, but obviously people want to try LLMs on all sorts of tasks where the stakes are much higher and the consequences could be severe.
The more you use it, the more you’ll begin to understand how much you can or cannot trust an LLM. A sensible person would become more suspicious of the results, but people don’t always make sensible decisions.
Crackhappy@lemmy.world 1 year ago
Not to mention that this “AI” is in no way actually AI. It’s just ML taken to a new level.
SkaveRat@discuss.tchncs.de 1 year ago
It’s not an AGI, but it’s still AI
deong@lemmy.world 1 year ago
There's no real distinction between the two. We don't have a rigorous definition of AI or intelligence, and we never have. Inside the field, "ML" has some recognized connotations, but outside the specialist literature both terms are just marketing fluff.
Zeppo@sh.itjust.works 1 year ago
It’s interesting that it started a conversation about “if this thing can make output exactly like a human, does it matter?” but I agree… it’s not conscious or ‘thinking’ about what it says. The output sure can be convincing, though.
FlyingSquid@lemmy.world 1 year ago
I think a huge way that it matters is that it doesn’t ask questions.
Zeppo@sh.itjust.works 1 year ago
That's a very good point. Even ELIZA asked questions (and the last thing we need now is a ChatGPT therapist mode). It's also a matter of what it's programmed to do, but I don't believe that the system has awareness or curiosity.
Crackhappy@lemmy.world 1 year ago
There is a fundamental difference between recombinant regurgitation and creation.