Do you understand how they work or not? First, take all the human text online. Next, model how likely each word is to follow the words before it. Last, write a loop that keeps picking the most probable next word until the end-of-sequence marker is judged most likely. There you go, that's essentially the loop of an LLM. There are design elements that make building the training data quicker, or the model quicker at picking the next word, but at the core this is all they do.
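To make that loop concrete, here's a minimal sketch in Python. Everything in it is illustrative: `TOY_MODEL` is a hand-made bigram table standing in for the trained network, and `<eos>` is a hypothetical end-of-sequence marker, not any real library's API.

```python
# Minimal sketch of the generation loop described above, with a toy
# bigram "model": a hand-made table of P(next word | previous word).
# A real LLM replaces this table with a neural network conditioned on
# the whole context, but the outer loop is the same shape.

END = "<eos>"  # hypothetical end-of-sequence marker

# Toy probabilities standing in for what training would learn (made up here).
TOY_MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, END: 0.3},
    "dog": {"sat": 0.5, END: 0.5},
    "sat": {END: 0.9, "the": 0.1},
}

def generate(prompt: list[str], max_tokens: int = 20) -> list[str]:
    """Greedy decoding: repeatedly append the most probable next word."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = TOY_MODEL[tokens[-1]]     # P(next word | last word)
        best = max(probs, key=probs.get)  # pick the most probable next word
        if best == END:                   # stop when the end marker wins
            break
        tokens.append(best)
    return tokens

print(generate(["the"]))  # -> ['the', 'cat', 'sat']
```

Real systems work on subword tokens rather than whole words, condition on the whole context rather than just the last word, and usually sample from the distribution (temperature, top-k, etc.) instead of always taking the argmax, but the shape of the loop is the same.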
It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.
I.e., the only duck it walks and quacks like is autocomplete; it does not have agency or any other "emergent" features. For something to even have an emergent property, the system needs feedback from itself, which an LLM lacks.
Skates@feddit.nl 1 year ago
If I were to send you a video of a duck quacking, would you skip your trip to the supermarket in the hope that your computer/phone/whatever you watch it on will now be able to lay eggs?

Listen. It was made to look like a duck. It was made to quack like a duck. It is not a duck. It is a painting of a duck. It won't fly, it won't lay eggs, it won't feel pain, it won't shit all over the floors. It's not a damn duck, and pretending it is one just because it looks like it and it quacks is like wanting to marry a fleshlight because it's really good at sex and never disagrees with you. Sure, go ahead and do it, but don't goddamn expect it to actually give birth; that's not its purpose.
gandalf_der_12te@feddit.de 1 year ago
Edgy comment here but:
In another thread we were discussing AI-generated CSAM. Thread:
feddit.de/post/6315841
If I continue your logic: you would probably agree, then, that such material is not problematic, because even if it looks like CSAM and quacks like CSAM, it is not CSAM, and therefore we don't have to take it seriously or regulate it in the ways we regulate actual CSAM, no?
wildginger@lemmy.myserv.one 1 year ago
Very, very different, because that AI image is intentionally attempting to realistically imitate an existing, living human victim, and because hyper-realistic child pornographic art is illegal.

Pedophiles have been making loads of AI child porn. But it's legal as long as it doesn't attempt to "look realistic" (whatever that means) and isn't trying to look like a real person.

Laws might change in the future, but currently AI child porn slips through the same legal cracks that 2D cartoon child porn does.