But just as Glaze’s userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks that disable Glaze’s protections—including attack methods exposed in June by security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze’s protections could be “easily bypassed, leaving artists vulnerable to style mimicry.”
Reminder that the author of Glaze, Ben Zhao, is a University of Chicago professor who stole open source code to make a closed-source tool that only targets open source models. Glaze never even worked on Microsoft’s, Midjourney’s, or OpenAI’s models.
pennomi@lemmy.world 4 months ago
Glaze has always been fundamentally flawed, a short-term bandage at best. There’s no way to make something appear correct to a human and incorrect to a computer over the long term - the researchers will simply retrain on the new data.
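For intuition, the kind of bypass the Zurich paper alludes to can be approximated with off-the-shelf image processing: Glaze-style perturbations are small, high-frequency pixel changes, so transforms a human barely notices can wash much of them out before training. Below is a minimal Python sketch along those lines, assuming Pillow is installed; the file names and the `purify` helper are hypothetical, and this is illustrative rather than the researchers’ actual pipeline.

```python
# Minimal sketch of a "purification" attack on perturbation-based
# protections like Glaze. Blurring slightly, then downscaling and
# upscaling, destroys much of the high-frequency adversarial noise
# while leaving the image visually intact for a human viewer.
# File names are placeholders, not from any real dataset.
from PIL import Image, ImageFilter

def purify(path: str, scale: float = 0.5) -> Image.Image:
    """Return a copy of the image with high-frequency noise suppressed."""
    img = Image.open(path).convert("RGB")
    # Mild blur smears the per-pixel perturbation pattern.
    img = img.filter(ImageFilter.GaussianBlur(radius=1))
    # Downscale then upscale to discard fine detail the eye won't miss.
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    return small.resize((w, h), Image.LANCZOS)

# Hypothetical usage:
# purify("glazed_artwork.png").save("purified.png")
```

A mimicry model fine-tuned on such purified copies would see mostly the original style rather than the protective perturbation, which is why the comment above calls this class of defense a short-term bandage.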
admin@lemmy.my-box.dev 4 months ago
Agreed. It was fun as a thought exercise, but this failure was inevitable from the start. Ironically, the existence and usage of such tools will only hasten their obsolescence.
The only thing that would really help is GDPR-style fines (calculated as a percentage of revenue, not profits) for any company that trains, or knowingly uses, models trained on data without explicit consent from its creators.
FaceDeer@fedia.io 4 months ago
That would "help" by basically introducing the concept of copyright for styles and ideas, which I think would likely have more devastating consequences for art than any AI could possibly inflict.