I need it to make academic papers pass anti-AI detection systems. What do you recommend for that? It’s for business, so I need reasonably good performance, but nothing extravagant…
Comment on: Is it possible to run an LLM on a mini-PC like the GMKtec K8 or K9?
StrawberryPigtails@lemmy.sdf.org 7 months ago
It’s doable. Stick to 7B models and it should work for the most part, but don’t expect anything remotely approaching what might be called reasonable performance. It’s going to be slow. But it can work.
To get a somewhat usable experience you kinda need an Nvidia graphics card or an AI accelerator.
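For a rough sense of why 7B is the practical ceiling on a mini-PC, here is a back-of-envelope sketch (not from the thread) of how much memory a model's weights alone need at common quantization levels; it ignores KV cache and runtime overhead, which add more on top:

```python
def model_size_gb(params_billion, bits_per_weight):
    """Approximate weight-storage size for a quantized model.

    Ignores KV cache and runtime overhead, so real memory use
    is somewhat higher than this figure.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B model at common quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_size_gb(7, bits):.1f} GB")
# 16-bit: ~14.0 GB
#  8-bit: ~7.0 GB
#  4-bit: ~3.5 GB
```

At 4-bit quantization a 7B model fits comfortably in the RAM of a mini-PC like the K8/K9, which is why those sizes are the usual recommendation for CPU-only inference.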
TheBigBrother@lemmy.world 7 months ago
entropicdrift@lemmy.sdf.org 7 months ago
I believe commercial LLMs embed some kind of watermark when you use them for grammar fixes and editing in general, so I need a private LLM to make these works undetectable.
That’s not how it works, sorry.
TheBigBrother@lemmy.world 7 months ago
I was talking about that with a friend a few days ago, and they ran an experiment: they had the AI correct only the punctuation errors in a text document, no word changes at all (which you could easily do manually), and the anti-AI system still flagged it as 99% AI-made. I don’t know how to explain that. Maybe the text was AI-generated to begin with, IDK, or there’s a watermark somewhere, a pattern or something.
entropicdrift@lemmy.sdf.org 7 months ago
Just that they’re no easier to use to fool an anti-AI system than ChatGPT, Gemini, Bing, or Claude. Those AI detectors also give false positives on works written by humans. They’re unreliable in the first place.
Basically, they’re “boring text detectors” more than anything else.
al4s@feddit.de 7 months ago
LLMs work by repeatedly predicting the most likely next token, and LLM detection works by checking how often the most likely next token was actually chosen. You can tell the LLM to pick less likely tokens more often (turn up the temperature parameter), but past a point you’ll only get gibberish out. So no, there is not.
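The mechanism described above can be sketched in a few lines. This is a generic softmax-with-temperature sampler over raw logits, not code from any particular LLM runtime; the logit values are made up for illustration:

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample a token index from logits after temperature scaling.

    Low temperature sharpens the distribution toward the most likely
    token (approaching greedy argmax as temperature -> 0); high
    temperature flattens it, picking unlikely tokens more often,
    which is what eventually produces gibberish.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):            # inverse-CDF sampling
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

With `temperature=0.01` this almost always returns the argmax token, which is exactly the regularity that detectors look for; raising the temperature trades that regularity for incoherence.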
1rre@discuss.tchncs.de 7 months ago
LLMs have a very predictable and consistent approach to grammar, punctuation, style, and general cadence, which is easily identifiable when compared to human-written content. It’s kind of a watermark, but it’s one the creators are aware of and are seeking to remove. That means if you want to use an LLM as a writing aid of any sort and want the result to read somewhat naturally, you’ll have to either get it to generate bullet points and expand on them yourself, or get it to generate the content and then rewrite it word for word in a style you’d write it in.
hperrin@lemmy.world 7 months ago
Maybe just write the academic works yourself, then they should pass.
TheBigBrother@lemmy.world 7 months ago
My friend used to employ several people for that, but they started using AI to work less, so he decided to start doing it himself with AI instead of paying someone else to do the same.
hperrin@lemmy.world 7 months ago
So your “friend’s” unethical business hired unethical workers and now you’ve come here to ask for advice on running your unethical business without paying anyone. Got it.
MangoPenguin@lemmy.blahaj.zone 7 months ago
Something with a GPU that’s good for LLMs would be best.
1rre@discuss.tchncs.de 7 months ago
Intel Arc also works surprisingly well and consistently for ML if you use llama.cpp for LLMs or AUTOMATIC1111 for Stable Diffusion; in terms of usability it’s definitely much closer to Nvidia than it is to AMD.
TheBigBrother@lemmy.world 7 months ago
You would suggest the K9 instead of the K8?