Comment on Results of the "Can you tell which images are AI generated?" survey
rainerloeten@lemmy.world 1 year ago
I said “reliably”, should have said “…and generally”. You can, as I said, always tailor a detector model to a certain target model (generator). But the reliability of this defense rests on the assumption that the target model is static and doesn’t change. This has been a common mistake in AI research on defenses against adversarial examples. And if you think about it, it’s a very strong assumption that doesn’t make much sense.
Again, learning the characteristics of one or several fixed models is trivial and gets us nowhere, because evasive techniques (e.g. finding ‘adversarial examples against the detector’, so to speak) can’t be prevented as of now, to the best of my knowledge.
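To make the evasion point concrete, here’s a minimal sketch of a one-step FGSM attack (Goodfellow et al., 2015) against a detector. The `detector` model and its logit convention are assumptions for illustration; only the attack technique itself is standard:

```python
# Hypothetical sketch: evading a fixed AI-image detector with one FGSM step.
# Assumes `detector` is a binary classifier whose raw logit > 0 means
# "AI-generated"; the model itself is a placeholder, not any real detector.
import torch
import torch.nn as nn

def evade_detector(detector: nn.Module, image: torch.Tensor,
                   eps: float = 2 / 255) -> torch.Tensor:
    """Perturb a generated `image` (shape (1, 3, H, W), values in [0, 1])
    so the detector's 'AI-generated' score drops, within an L-infinity
    budget `eps` small enough to be visually imperceptible."""
    detector.eval()
    x = image.clone().requires_grad_(True)
    logit = detector(x)                        # detector's raw score
    target = torch.zeros_like(logit)           # pretend label: "real"
    loss = nn.functional.binary_cross_entropy_with_logits(logit, target)
    loss.backward()
    # Step against the gradient: nudge the image toward the "real" class.
    adv = (x - eps * x.grad.sign()).clamp(0, 1).detach()
    return adv
```

As long as the attacker can query gradients (or approximate them with black-box queries), any fixed detector gives them a static target to optimize against.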
doctorcrimson@lemmy.today 1 year ago
By steering this conversation away from practical examples and our current reality, you have the two of us operating purely off hypotheticals. With that in mind, you could completely skip reading the rest of this comment and it won’t impact your life in any way, shape, or form.
If you think about it, the changes in generating models trained on data from the internet would actually make the defensive model more accurate over time (and to be clear, it’s wrong to think the AI-based defensive model would be static either), because the less-than-99%-accurate generating models would eventually feed their own output back into themselves, degrading in quality over time. This is especially true when models are allowed to learn and grow from user prompts, because users are likely to resubmit the results or make generative API requests in repeating sequences to make shifting visuals for use in things like song visualisers or short video clips.
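Here’s a toy sketch of that feedback loop, under assumed toy dynamics: a “generator” that just fits a Gaussian to its training data and is retrained on its own samples each generation. All the numbers are illustrative, not measurements of any real model:

```python
# Toy model-collapse loop: refit a Gaussian "generator" on its own samples.
# Finite-sample fitting errors compound, so the fitted distribution drifts
# and its spread tends to shrink generation over generation.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=0.0, scale=1.0, size=10_000)   # the "real" data

mu, sigma = real.mean(), real.std()                  # generation 0 fit
for gen in range(1, 11):
    samples = rng.normal(mu, sigma, size=1_000)      # generator's output
    mu, sigma = samples.mean(), samples.std()        # retrain on own output
    # A fixed detector keyed to the real distribution has an easier job
    # the further the generator drifts from mu=0, sigma=1.
    print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}  "
          f"drift={abs(mu):.3f}")
```

The point of the sketch is just that a generator ingesting its own output moves away from the real data, which is exactly the regime where a detector gains ground rather than loses it.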