I’m viewing this as a warning for everyone to be more vigilant against an all-out bot invasion.
OP should be careful not to get banned though
Submitted 22 hours ago by mudkip@lemdro.id to [deleted]
https://files.catbox.moe/u8nnrl.png
downvote wave incoming (watch out)
For the record, AI is a massive, cash-sucking data-harvesting racket funded by a bunch of billionaire assholes. It slurps up insane amounts of water every day and still can’t stop spewing garbage. I don’t use or support any of it, fuck that nonsense.
absolute insanity that this ai generated/copypasta answer is getting upvotes. you are a god at baiting lemmy users
Idk if this information can help us, but I’m a cognitive researcher. Although linguistics is not my area, I know a bit about linguistics and human errors. Humans use language in constantly evolving, creative ways, which is difficult for LLMs.
There are also specific types of errors that humans make that are kind of unique to us.
These types of errors can be indicators of a real human, because humans make them somewhat randomly. We’re more likely to make them depending on how tired we are and on “priming”, and neither of those exists in a language model.
Ok, so what “errors” am I talking about? (By errors I mean language that deviates from grammar rules.)
LLMs are largely trained on books and essays, not on dialogue.
Writing the way we talk is harder for LLMs to interpret. They aren’t terrible at it when what we say is simple commands, but once it deviates from that, the LLM just pulls out keywords and does the best it can, making errors.
“So, what you think is that it’s really the others? Like, I don’t know what you mean.”
That’s how a person may talk. Others can understand it from the context of the conversation, but it makes zero sense in isolation. It’s Very difficult for LLMs to understand that type of sentence.
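To make that “pulls out keywords” idea concrete, here’s a toy sketch (purely my own illustration, nothing like how a real LLM actually processes text): strip the common function words from that sentence and look at what survives.

```python
# Toy illustration of naive "keyword pulling" (not how a real LLM works):
# drop common function words and see what is left of a conversational sentence.
# Tiny hand-picked stopword list, just for this demo.
STOPWORDS = {
    "so", "what", "you", "think", "is", "that", "it's", "really", "the",
    "like", "i", "don't", "know", "mean",
}

sentence = "So, what you think is that it's really the others? Like, I don't know what you mean."
words = [w.strip(",.?").lower() for w in sentence.split()]
keywords = [w for w in words if w and w not in STOPWORDS]
print(keywords)  # ['others'] - nearly nothing of the speaker's actual intent survives
```

Almost everything that carried the meaning lived in the function words and in the context of the conversation, which is exactly what a naive keyword pass throws away.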
This includes slang, invented or creative uses of words, and common verbal grammar errors.
For instance, I might say in real life “that pizza was fire”. You know what I mean. An LLM might think I meant the pizza was cooked in a fire oven or burnt.
If I use an emoji for the pizza or fire, The LLM struggles even more to come up with an appropriate response/interpretation.
LLMs don’t actually interpret anything, so I don’t mean that in the literal sense. I’m still talking about pattern matching.
Just to clarify.
Anyway.
Slang and sayings change very fast. Humans can keep up with them; LLMs struggle because these expressions change meaning quickly and go out of style as fast as they come in.
Another human error is when we use a word that “looks” similar to the correct word but is not semantically related.
It’s not a word with a similar meaning. It actually makes no sense. But it “looks” like the correct word.
For example, someone might be describing a “platonic” relationship and use the word “planting”.
These words both start with “pl”, are about the same length, And the g in planting has a “c” shape within it.
If you see text with these types of errors, it’s likely a human.
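To show what I mean by “looks similar” with no meaning involved, here’s a toy sketch (my own illustration, not something anyone actually uses for this) that scores words purely on their surface form using Python’s standard-library SequenceMatcher.

```python
# Toy sketch: character-level similarity with no notion of meaning at all.
from difflib import SequenceMatcher

def surface_similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] based only on shared characters and their ordering."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# "planting" is nothing like "platonic" in meaning, but it looks like it:
print(surface_similarity("platonic", "planting"))    # ~0.62 - visually close
print(surface_similarity("platonic", "friendship"))  # ~0.11 - related meaning, looks nothing alike
```

The swap lives entirely on the “looks like” axis rather than the “means like” axis, which is why it reads as so distinctly human.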
Another common type of human error is editing errors.
For instance, you might have noticed that sometimes the 2nd or even 3rd word in my sentences is uppercase. This is due to me editing the text to add a better start to the sentence after I already wrote it.
And I can’t be bothered to remove the incorrect capital letter.
This is something a human would do, and its location would make sense to other humans, because we understand intuitively how language gets reduced. An LLM does not, because we rarely reduce language in books, essays, or even in typing. But we do do it a lot in natural verbal conversations.
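For what it’s worth, that particular artifact is easy to spot with a toy heuristic (again, my own illustration, and it will throw false positives on proper nouns): look for a capitalised word sitting mid-sentence after a comma or a lowercase word.

```python
# Toy sketch: flag mid-sentence capitalised words, the kind of leftover a human
# produces after rewriting the start of a sentence. Proper nouns mid-sentence
# will trigger false positives, so treat this as a rough signal only.
import re

# A capitalised word that follows a comma or a lowercase word, i.e. a word that
# is not at the start of a sentence but is capitalised anyway.
PATTERN = re.compile(r"(?:,|\b[a-z]+)\s+([A-Z][a-z]+)")

text = "If I use an emoji for the pizza or fire, The LLM struggles even more."
print(PATTERN.findall(text))  # ['The'] - a likely editing artifact
```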
Anyway. I worried that putting this out could be used against us, but I also don’t think LLMs can sidestep these issues. If they try to add errors, it’s going to result in incoherent garble, because human errors are not statistically systematic. They do follow systematic cognitive patterns that can be predicted if you understand how priming works, but not at a level an LLM could reproduce.
More precisely, they can be predicted backwards, not forwards.
I can recognize an error and make some likely guesses about what caused it, but I cannot predict an error that has not occurred yet from its possible causes, because those causes are virtually unmeasurable and can’t be identified.
Hope that’s not too confusing. Wow this is getting long.
If anyone who reads this has any questions, or thoughts on the topic, please comment.
Maybe run a little JavaScript to see if the extension is there or something? I want to try, but I don’t want to install that software to test it.
This tool is gonna flood every forum and comment section now. Why was this made?
s@piefed.world 14 hours ago
Devs and admins, take this post as a herald of what is to come.