most people, unfortunately, don’t seem to think when they see the letters ‘A’ and ‘I’… these people probably would burn sage at the sight of the identity matrix lol.
i think you’re probably wasting your breath here but you seem like you might be cool, so if you’re interested in discussing ML at all reach out fs!
Chronographs@lemmy.zip 4 months ago
Every single one of those I’d put under the second category. It’d be hard to detect but it’s certainly not subjective. It just depends on how it’s written.
jwmgregory@lemmy.dbzer0.com 4 months ago
i’m fucking dead you have to be taking the piss lmfao
Chronographs@lemmy.zip 4 months ago
Whether AI art is good is subjective, it will change based on the whims of who you ask and cannot be defined. Whether something is AI generated depends on what definition you use but given a definition it either fits it or it doesn’t. It’s not subjective it’s just a little broad. As far as it being hard to detect that has no bearing on whether it is or isn’t AI.
jwmgregory@lemmy.dbzer0.com 4 months ago
I am so sorry, I don’t mean to be terse, but we must speak a different English, because this is the actual fucking dictionary definition of “subjective”:

[screenshot of a dictionary definition of “subjective”]
Regardless,
You summed up the problem with your own semantic definitions and viewpoints earlier pretty well. What you’re basically saying is that there could exist a model that defines and filters AI content based on a subjective definition of genAI. No shit, Sherlock - that’s fucking trivially true of anything. There could exist a model that subjectively defines unicorns and filters them out of all content, too. That doesn’t mean it’s actually useful to anybody or that there’s any practical reason to build it.
You’re just talking past @chrash0@lemmy.world who’s trying to point out to you that actually defining what constitutes genAI content is the hard part. You’re being obtuse and intentionally ignoring it by focusing on the implementation itself being easy.
Of course filtering things by a definition you’ve set is trivial. But out of the infinitely many possible definitions we could choose, how do we make the right assumptions to pick the most useful one? Do you see the issue, and why you’re being kind of fucking stupid, man?
chrash0@lemmy.world 4 months ago
but what are the criteria? just because you think you have a handle on it doesn’t mean everyone else does, or even shares your conclusion. and there’s no metric here i can measure to, for example, block it from my platform.
Chronographs@lemmy.zip 4 months ago
The criteria are whatever you put in the “no AI” policy on the site, anywhere from ‘you can’t post videos wholly generated from a prompt’ to ‘you can’t post anything that uses any form of neural net in the production chain’, or something in between. You can specify which types are and are not included and blanket ban/allow everything else. It can definitely be defined in the user agreement; the part that’s actually hard is detection/enforcement.
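For what it’s worth, the policy-enforcement half really is trivial - here’s a minimal sketch (all field and tag names are hypothetical, and it assumes posts honestly self-declare their AI usage, which is exactly the detection problem it doesn’t solve):

```python
# Hypothetical "no AI" policy check. Enforcing a written policy is easy
# *if* each post carries honest self-declared tags; detecting unlabeled
# AI content is the hard part and is not addressed here.

BANNED_TAGS = {"prompt_generated_video", "neural_net_in_pipeline"}

def violates_policy(post: dict) -> bool:
    """Return True if the post's self-declared AI tags hit the banned list."""
    return bool(BANNED_TAGS & set(post.get("ai_tags", [])))

print(violates_policy({"ai_tags": ["prompt_generated_video"]}))  # True
print(violates_policy({"ai_tags": []}))                          # False
```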
chrash0@lemmy.world 4 months ago
my point is that it’s hard to program someone’s subjective point of view, even written out in whatever form of legalese, into a detection system, especially when those same detection systems can be used to great effect to train generators that bypass them. any such detection system would likely be an “AI” in the same way the ones they ban are, and would be similarly prone to mistakes and to reflecting the values of the company (read: Jack Dorsey) rather than enforcing any objective ethical boundary.