Comment on: Is it possible to run an LLM on a mini-PC like the GMKtec K8 and K9?
entropicdrift@lemmy.sdf.org 4 months ago
> I believe commercial LLMs have some kind of watermark when you apply AI for grammar and fixing in general, so I just need an AI to make these works undetectable with a private LLM.
That’s not how it works, sorry.
TheBigBrother@lemmy.world 4 months ago
I was talking about that with a friend a few days ago, and they ran an experiment: they had the AI correct only the punctuation errors in a text document, no new words at all (those you could easily add manually), and the anti-AI system still flagged it as 99% AI-made. I don't know how to explain that. Maybe the text was AI-generated in the first place, IDK, or there's a watermark somewhere, a pattern or something.
entropicdrift@lemmy.sdf.org 4 months ago
Just that they’re no better at fooling an anti-AI system than ChatGPT, Gemini, Bing, or Claude. Those AI detectors also give false positives on works written by humans. They’re unreliable in the first place.
Basically, they’re “boring text detectors” more than anything else.
TheBigBrother@lemmy.world 4 months ago
I have a friend who runs a business doing homework on demand. He uses AI to do the work, and he recently had a piece of work sent back because AI-generated content was detected in it. He used to employ real people to do the work, but the real people sometimes used AI too. Anyway, he knows I’m a “hacker” LMAO, so he asked me if I knew any way to fool the anti-AI systems. I thought about running a private LLM and training it on real human-generated content, like ebooks matching the subject of the work. Do you think it could be possible to fool these things with this method?
entropicdrift@lemmy.sdf.org 4 months ago
So first of all, you shouldn’t involve yourself in your friend’s business. Fraud is generally frowned upon.
But secondly, you know that ChatGPT was trained on the entire internet, right? Like, every book. I don’t think “more books” is gonna help.
I hope you take your computer skills and make something of yourself. Try not to get any more involved in this scheme, seriously. You don’t need this crap marring your reputation.
Besides, there are better reasons/ways to fight the system than helping other people avoid learning.
hperrin@lemmy.world 4 months ago
Your “friend’s” business is very unethical. Maybe your friend should think about what they’re doing with their life, and quit doing this.
al4s@feddit.de 4 months ago
LLMs work by always predicting the most likely next token, and LLM detection works by checking how often the most likely next token was chosen. You can tell the LLM to pick less likely tokens more often (turn up the temperature parameter), but you will only get gibberish out if you do. So no, there isn’t.
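A minimal sketch of both ideas (toy numbers, plain Python, not any real model's or detector's code): temperature flattens the next-token distribution, and a crude "detector" scores how often the top-ranked token was chosen.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities; higher temperature flattens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for four candidate next tokens (purely illustrative values).
logits = [4.0, 2.0, 1.0, 0.5]

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"temperature={t}: {[round(p, 3) for p in probs]}")
# At low temperature the top token dominates (very predictable output);
# at high temperature the choices spread out, and the text drifts toward gibberish.

def top_token_rate(chosen_ranks):
    """Fraction of positions where the single most likely token was picked.
    A crude stand-in for how a statistical detector scores 'too predictable' text."""
    return sum(1 for r in chosen_ranks if r == 0) / len(chosen_ranks)

# Rank 0 means "the most likely token was chosen" at that position (made-up data).
print(top_token_rate([0, 0, 1, 0, 0, 2, 0, 0]))  # high rate -> likely flagged as AI
```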
TheBigBrother@lemmy.world 4 months ago
What if you train the AI on human-generated content? For example, e-books?
1rre@discuss.tchncs.de 4 months ago
LLMs have a very predictable and consistent approach to grammar, punctuation, style, and general cadence that is easily identifiable when compared to human-written content. It’s kind of a watermark, but one the creators are aware of and are trying to remove. That means if you want to use an LLM as a writing aid of any sort and want the result to read somewhat naturally, you’ll either have to get it to generate bullet points and expand on them yourself, or get it to generate the content and then rewrite it word for word in a style you’d write it in.
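For a sense of what "consistent cadence" can mean in practice, here is a minimal sketch of one such stylometric signal, sentence-length burstiness (illustrative only, not any real detector's algorithm): human writing tends to mix short and long sentences, while LLM output is often more uniform.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths in words. Lower = more uniform cadence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Made-up example snippets, just to show the measurement.
human_like = "I tried it. It failed, badly, and in a way nobody at the office had predicted. Weird."
llm_like = "The system processes the input. The system validates the result. The system stores the output."

print(round(burstiness(human_like), 2))  # wider spread of sentence lengths
print(round(burstiness(llm_like), 2))    # flatter, more uniform cadence
```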