Even_Adder
@Even_Adder@lemmy.dbzer0.com
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Just recently in the thread, they asked me to define what I meant by public interest, and I gave examples of what I meant. They say they don’t care about the laws, so I asked them to look not at the laws themselves, but at what the laws protect. In their reply, they again turned the conversation to the fact that legal language was used in the material I linked, rather than considering the ramifications of the public losing those protections. They went so far as to cherry-pick quotes from the blog post I linked and present them in a way that completely misrepresents the point of the post.
In the message before the one you replied to, I clearly stated what I’m arguing and why, and in their reply they completely distort what I said into a straw man that they then mock.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Tagging and filtering works for communities that need it. Those that have NSFW content enforce rules that allow users to customize their feeds, just like is done here with the [AI] tag.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Please explain how honoring artists’ will can make the situation 10x worse?
That’s what I was talking about when I said:
Using things “without permission” forms the bedrock on which artistic expression and free speech are built. They want you to believe that analyzing things without permission somehow goes against copyright, when in reality, fair use is a part of copyright law, and the reason our discourse isn’t wholly controlled by mega-corporations and the rich.
And when I said:
What do you think someone who thinks you’re going to write an unfavorable review would say when you ask them permission to analyze their work? They’ll say no. One point for the scammers. When you ask someone to scrutinize their interactions online, what will they say? They’ll say no, one point for the misinformation spreaders. When you ask someone to analyze their thing for reverse engineering, what will they say? They’ll say no, one point for the monopolists. When you ask someone to analyze their data for indexing, what will they say? They’ll say no, one point for the obstructors.
And when I said:
…If we allow that type of overreach, we would be giving anyone a blank check to threaten the general populace with legal trouble just from the way you draw the eyes on a character. This is bad, and I shouldn’t have to explain or spell it out to you.
What these people want unfairly restricts self-expression and speech. Art isn’t a product, it is speech, and people are allowed to participate in conversations even when there are parties that would rather they didn’t. Wanting to bar others from iterating on your ideas or expressing the same ideas differently is both selfish and harmful. That’s why the restrictions on art are so flexible and allow for so much to be pulled from to make art.
And we’re discussing your assertion that AI art is unethical because of how it’s trained. I’ve given examples and explanations of how your views on honoring artists’ will are not only wrong, but shortsighted and harmful to all of us. I do this not only in hopes of changing your mind, but also the minds of anyone who might be reading this thread. You have spent hours dishonestly dodging the actual points I’ve made; it’s not surprising you’re lost this far in.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Your actions don’t match your words, friend. It seems to me that if you were here doing as you say, there wouldn’t be any doubt that tags are the best solution in this situation. People who want to view the content can, and those who don’t can avoid it, as has always been done.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Look at it this way. You want to bar me from posting this stuff here, even though you aren’t bereft of other communities that function the way you want. I push back because you have the option to customize your feed and countless other communities to choose from. Why are you trying to take away my few choices?
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
I asked you to think about what copyright protects. It gives artists protection over specific expressions, not broad concepts like styles, and this fosters ethical self-expression and discourse. If we allow that type of overreach, we would be giving anyone a blank check to threaten the general populace with legal trouble just from the way you draw the eyes on a character. This is bad, and I shouldn’t have to explain or spell it out to you.
What these people want unfairly restricts self-expression and speech. Art isn’t a product, it is speech, and people are allowed to participate in conversations even when there are parties that would rather they didn’t. Wanting to bar others from iterating on your ideas or expressing the same ideas differently is both selfish and harmful. That’s why the restrictions on art are so flexible and allow for so much to be pulled from to make art.
It is spelled out in the links I’ve replied with how these shortsighted power grabs will consolidate power at the top and damage life for us all. While Cory Doctorow doesn’t endorse AI art, he agrees that it should exist. He goes on to say that you can’t fix a labor problem with copyright, the way some artists are trying to do. That just changes how, and how much, you end up paying the people at the top.
And I want to reiterate, I’m not talking about the law here, I’m talking about the effects the laws have. I feel for the artists here, but honoring a special monopoly on abstract ideas and general forms of expression is a recipe for disaster that will only make our situation 10x worse.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
The content is allowed here, you’re the one saying it shouldn’t be, when there are other communities like you describe. You’re not pushing back, you’re pushing into an already established community rather than curating your own feed.
We can continue this conversation if you’re willing to proceed in good faith, but putting words in my mouth and trying to misrepresent the situation isn’t cool. If you can’t own up to your side of the argument and have to try to turn it on me, you’ve already lost the plot. This kind of manipulation leads to miscommunication, kills the actual dialogue, and makes you look even weaker than your argument.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
I’m not telling you to ponder this from a legal perspective; look at what those laws protect from an ethical perspective. And I urge you again to actually read the material. It goes in depth and explains how all this works and the ways in which it’s all related. A quick excerpt:
Break down the steps of training a model and it quickly becomes apparent why it’s technically wrong to call this a copyright infringement. First, the act of making transient copies of works – even billions of works – is unequivocally fair use. Unless you think search engines and the Internet Archive shouldn’t exist, then you should support scraping at scale:
pluralistic.net/…/how-to-think-about-scraping/
And unless you think that Facebook should be allowed to use the law to block projects like Ad Observer, which gathers samples of paid political disinformation, then you should support scraping at scale, even when the site being scraped objects (at least sometimes):
pluralistic.net/2021/…/get-you-coming-and-going/#…
After making transient copies of lots of works, the next step in AI training is to subject them to mathematical analysis. Again, this isn’t a copyright violation.
Making quantitative observations about works is a longstanding, respected and important tool for criticism, analysis, archiving and new acts of creation. Measuring the steady contraction of the vocabulary in successive Agatha Christie novels turns out to offer a fascinating window into her dementia:
theguardian.com/…/agatha-christie-alzheimers-rese…
Programmatic analysis of scraped online speech is also critical to the burgeoning formal analyses of the language spoken by minorities, producing a vibrant account of the rigorous grammar of dialects that have long been dismissed as “slang”:
researchgate.net/…/373950278_Lexicogrammatical_An…
Since 1988, the UCL Survey of English Usage has maintained its “International Corpus of English,” and scholars have plumbed its depths to draw important conclusions about the wide variety of Englishes spoken around the world, especially in postcolonial English-speaking countries:
www.ucl.ac.uk/english-usage/projects/ice.htm
The final step in training a model is publishing the conclusions of the quantitative analysis of the temporarily copied documents as software code. Code itself is a form of expressive speech – and that expressivity is key to the fight for privacy, because the fact that code is speech limits how governments can censor software:
If you’re not willing to do that, there isn’t much I can do, since all of your questions are answered there.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
I’m not telling anyone they’re wrong because of stuff they don’t want to see; I only want them to use the tools available to them before making knee-jerk decisions that can have adverse effects on the community. As easy as it is to create communities, it’s even easier to use the blocking tools yourself. This conversation has taken hundreds of times longer than it would have for someone to block and move on.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Just because the majority thinks one way doesn’t mean they aren’t wrong or ignorant. History is full of examples where the crowd went the wrong way on issues. Hell, you don’t even need history, just look at the US today. A community without dissent is dooming itself to ignorance and leaving itself vulnerable to the machinations of bad actors. The reality is that justice and truth aren’t the same as popularity, and we have to push against the crowd sometimes to get to it. Lemmy arms us with the tools to do just that, and it’s up to us to use them whenever possible.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
The tags exist here because we already agreed that was the way we were handling content. In the meantime, you can just block me until tags arrive. That would be the simplest way to filter this content from your view.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Please actually read the things I linked, they’ll explain this better than I can. Here are a few quotes:
Pluralistic: AI “art” and uncanniness
counting words and measuring pixels are not activities that you should need permission to perform, with or without a computer, even if the person whose words or pixels you’re counting doesn’t want you to. You should be able to look as hard as you want at the pixels in Kate Middleton’s family photos, or track the rise and fall of the Oxford comma, and you shouldn’t need anyone’s permission to do so.
How We Think About Copyright and AI Art
Moreover, AI systems learn to imitate a style not just from a single artist’s work, but from other human creations that are tagged as being “in the style of” another artist. Much of the information contributing to the AI’s imitation of style originates with images by other artists who are enjoying the freedom copyright law affords them to imitate a style without being considered a derivative work.
The people who train these systems still have rights like you and me, and the public interest transcends individual consent. Rights holders, even when they are living, breathing individuals, would always prefer to restrict our access to materials, but from an ethical standpoint, the benefits we see from fair use and library lending outweigh author permissions. We need to uphold a higher ethical standard here for the benefit of society so that we don’t end up building a utopia for corporations, bullies, and every wannabe autocrat, destroying open dialogue in the process.
What do you think someone who thinks you’re going to write an unfavorable review would say when you ask them permission to analyze their work? They’ll say no. One point for the scammers. When you ask someone to scrutinize their interactions online, what will they say? They’ll say no, one point for the misinformation spreaders. When you ask someone to analyze their thing for reverse engineering, what will they say? They’ll say no, one point for the monopolists. When you ask someone to analyze their data for indexing, what will they say? They’ll say no, one point for the obstructors.
And again, I urge you to read this article by Kit Walsh, and this one by Tory Noble, both of them staff attorneys at the EFF, this open letter by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries, and these two blog posts by Cory Doctorow.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Community post tags are being added to Lemmy soon, so you won’t have to.
Lemmy works because you’re able to tailor your own experience, rather than trying to force your content preferences on everyone else. The way you carry on is unnecessarily divisive and tribalistic, and it’s going to cause Lemmy to eat itself alive.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Using things “without permission” forms the bedrock on which artistic expression and free speech are built. They want you to believe that analyzing things without permission somehow goes against copyright, when in reality, fair use is a part of copyright law, and the reason our discourse isn’t wholly controlled by mega-corporations and the rich.
What some people want will cripple essential resources like reviews, research, reverse engineering, and indexing, and give mega-corps a monopoly on AI by making it prohibitively difficult for anyone else.
I recommend reading this article by Kit Walsh and this one by Tory Noble, staff attorneys at the EFF, this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries, and these two by Cory Doctorow.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
Lemmy.world has the Tesseract front end and community post tags are being added soon to the default one.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
The reason they’re tagged is so you can filter them out. You don’t have to see them if you don’t want to.
- Comment on [AI] Niwatari Kutaka 2 weeks ago:
The rules allow AI generated content, stating
AI generated content have to be marked by [AI] prefix in post label
If it’s all according to the rules of the community, the report would be frivolous and possibly an abuse of the report function. - Submitted 2 weeks ago to touhou@lemmy.world | 40 comments
- Comment on ‘FuckLAPD’ Lets Anyone Use Facial Recognition to Identify Cops. 4 weeks ago:
Record them anyway. There’ll be more ways to de-anonymize them in the future.
- Comment on Cultural differences 5 weeks ago:
This is definitely the type that grants wishes.
- Comment on Is Google about to destroy the web? 1 month ago:
But the people making money off of all of that are mad now, hence this article.
- Comment on Why so much hate toward AI? 1 month ago:
You can’t copyright styles or be sued over them. Studio Ponoc is made up of ex-Ghibli staff, and they have been releasing movies for a while. Stop spreading misinformation.
www.imdb.com/title/tt16369708/ www.imdb.com/title/tt15054592/ www.imdb.com/title/tt8223844/ www.imdb.com/title/tt6336356/
- Comment on MARVEL Tōkon: Fighting Souls | Announce Trailer 1 month ago:
The dream is dead.
- Comment on Grave of the Fireflies 1 month ago:
Did someone dare you to do it?
- Comment on Former Meta exec says asking for artist permission will kill AI industry 1 month ago:
So you don’t interact with AI stuff outside of that? Have you seen any cool research papers or messed with any local models recently? Getting a bit of experience with the stuff can help you better inform people and see through the more bogus headlines.
- Comment on Former Meta exec says asking for artist permission will kill AI industry 1 month ago:
It definitely seems that way depending on what media you choose to consume. You should try to balance the doomer scroll with actual research and open source news.
- Comment on Former Meta exec says asking for artist permission will kill AI industry 1 month ago:
Ok, but is training an AI so it can plagiarize, often verbatim or with extreme visual accuracy, fair use? I see the first 2 articles argue that it is, but they don’t mention the many cases where the crawlers and scrapers ignored rules set up to tell them to piss off. That would certainly invalidate several cases of fair use.
You can plagiarize with a computer with copy & paste too. That doesn’t change the fact that computers have legitimate non-infringing use cases.
Instead of charging for everything they scrape, the law should force them to release all their data and training sets for free.
I agree
I’d wager 99.9% of the art and content created by AI could go straight to the trashcan and nobody would miss it. Comparing AI to the internet is like comparing writing to doing drugs.
But 99.9% of the internet is stuff that no one would miss. Things don’t have to have value to you to be worth having around. That trash could serve as inspiration for the 0.1% of people, or garner feedback that helps people improve.
- Comment on Former Meta exec says asking for artist permission will kill AI industry 1 month ago:
But the law is largely the reverse. It only denies use of copyrighted works in certain ways. Using things “without permission” forms the bedrock on which artistic expression and free speech are built.
AI training isn’t only for mega-corporations. Setting up barriers that only benefit the ultra-wealthy will only end with corporations gaining a monopoly on a public technology by making it prohibitively expensive and cumbersome for regular folks. What the people writing this article want would mean the end of open access to competitive, corporate-independent tools and would jeopardize research, reviews, reverse engineering, and even indexing information. They want you to believe that analyzing things without permission somehow goes against copyright, when in reality, fair use is a part of copyright law, and the reason our discourse isn’t wholly controlled by mega-corporations and the rich.
I recommend reading this article by Kit Walsh and this one by Tory Noble, staff attorneys at the EFF, this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries, and these two by Cory Doctorow.
- Comment on [DISC] THE ISEKAI DOCTOR Any sufficiently advanced medical science is indistinguishable from magic. - Ch. 39 2 months ago:
Didn’t this get taken down off Mangadex in the last crusade?
- Comment on Looking for recommendations 2 months ago:
I recommend Bakemono no Ko.