Comment on A Project to Poison LLM Crawlers
Disillusionist@piefed.world 3 weeks ago
AI companies could start, I don’t know, maybe asking for permission to scrape a website’s data for training? Or maybe try behaving more ethically in general? Perhaps then they wouldn’t risk people poisoning data they clearly never agreed to have used for training?
Lembot_0006@programming.dev 3 weeks ago
Why should they ask permission to read freely provided data? Nobody else asks for permission, so why should LLM trainers? And what do you want from them from an ethical standpoint?
GunnarGrop@lemmy.ml 3 weeks ago
Much of it might be freely available data, but there’s a huge difference between you accessing a website for data and an LLM operation doing the same thing. Bots have been scraping websites since the ’90s; it’s not a new thing. And for about as long as scraping bots have existed, the web has had a standard for dealing with them, called robots.txt: a text file telling bots what they’re allowed to do on a website and how they should behave.
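To make that concrete, here’s a rough sketch of what a robots.txt can look like and how a well-behaved bot is supposed to consult it, using Python’s standard urllib.robotparser (the file contents and bot names here are just illustrative):

```python
from urllib import robotparser

# An illustrative robots.txt (contents are hypothetical):
ROBOTS_TXT = """\
User-agent: *
Crawl-delay: 10
Disallow: /private/

User-agent: GPTBot
Disallow: /
"""

# A polite crawler parses the file once, then asks before every fetch.
rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False: banned site-wide
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True: allowed
print(rp.crawl_delay("SomeOtherBot"))                               # 10: seconds to wait between requests
```

None of this is enforced technically; it only works if the bot bothers to check.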
LLM crawlers are notorious for disrespecting robots.txt, leading to situations where small companies and organisations have their websites scraped so thoroughly and so frequently that they can’t even stay online anymore, and their operational costs skyrocket. In the last few years we’ve had to develop ways just to protect ourselves against this; see the Anubis project.
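For a sense of how that protection works: the core idea behind a proof-of-work gate like Anubis is to make every client burn a little CPU before it gets the page. What follows is just a sketch of the concept in Python, not Anubis’s actual implementation (it really serves a JavaScript challenge to the browser, and the difficulty value here is made up):

```python
import hashlib
import secrets

DIFFICULTY = 4  # leading zero hex digits required; hypothetical setting

def make_challenge() -> str:
    """Server hands the client a random challenge string."""
    return secrets.token_hex(16)

def solve(challenge: str) -> int:
    """Client brute-forces a nonce; cheap once, expensive at crawler scale."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    """Server checks the submitted proof with a single hash."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

challenge = make_challenge()
nonce = solve(challenge)  # a human's browser does this once, unnoticed
assert verify(challenge, nonce)
```

One person loading one page barely notices the cost; a crawler hammering millions of pages pays it on every single request.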
Hence, it’s much more important that LLM crawlers follow the rules than that you or I do so on an individual level.
It’s the difference between you killing a couple of bees in your home versus an industry specialising in exterminating bees at scale. The efficiency is a big factor.
Disillusionist@piefed.world 3 weeks ago
Is the only imaginable system for AI to exist one in which every website operator, musician, artist, writer, etc. has no say in how their data is used? Is it possible to have a more consensual arrangement?
As far as the question about ethics, there is a lot of ground to cover, and a lot of it is being discussed. I’ll basically reiterate what I said about data rights: I believe they are pretty fundamental to human rights, for a lot of reasons. AI is killing open source and claiming the whole of human experience for its own training purposes. I find that unethical.
Lembot_0006@programming.dev 3 weeks ago
Killing open source? How?!
Disillusionist@piefed.world 3 weeks ago
[For instance](https://vger.to/programming.dev/post/43810907)
BaroqueInMind@piefed.social 3 weeks ago
As someone who self-hosts an LLM and regularly trains it on web data to improve my model, I get where your frustration is coming from.
But engaging in discourse here, where people already have a heavy bias against machine-learning language models, is a fruitless effort. No one here is going to give you the catharsis of a genuine conversation that isn’t just rhetoric.
Just put the keyboard down and walk away.
Rekall_Incorporated@piefed.social 3 weeks ago
I don’t have a bias against LLMs. I use them regularly, albeit only for casual things (movie recommendations) or as an automation tool in work areas where I can somewhat easily validate the output or where the specific task is low impact.
I am just curious, do you respect robots.txt?
FaceDeer@fedia.io 3 weeks ago
I think it's worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.
Also, you never engage in Internet debate to convince the person you're actually talking to; that almost never happens. The point of debate is to present convincing arguments to the less-committed casual readers who are lurking rather than participating directly.
Disillusionist@piefed.world 3 weeks ago
I agree with you that there can be value in “showing people that views outside of their like-minded bubble[s] exist”. And you can’t change everyone’s mind, but I think it’s a bit cynical to assume you can’t change anyone’s mind.
Disillusionist@piefed.world 3 weeks ago
I can’t speak for everyone, but I’m absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don’t know everything. It’s one of the reasons I post, for discussion. It’s really unproductive to make blanket statements that try to end discussion before it starts.
FauxLiving@lemmy.world 3 weeks ago
I don’t know, it seems like their comment accurately predicted the response.
Even if you want to see yourself as some beacon of open and honest discussion, you have to admit that there are a lot of people who are toxic to anybody who mentions any position that isn’t rabidly anti-AI enough for them.
ExLisper@lemmy.curiana.net 3 weeks ago
Yes, they should, because they generate way more traffic. Why do you think people are trying to protect websites from AI crawlers? Because they want to keep public data secret?
Also, everyone knows AI companies have used copyrighted materials and private data without permission. If you think they only used public data, you’re uninformed or lying on their behalf.
Lembot_0006@programming.dev 3 weeks ago
I personally consider the current copyright laws completely messed up, so I see no problem in using any data that is technically available for processing.
ExLisper@lemmy.curiana.net 3 weeks ago
OK, so you think it’s fine for big companies to break the laws you don’t like. Cool. I’m sure those big companies won’t sue you when you break some law of theirs that you don’t like.
And I like the way you just ignored the two other issues I mentioned. Are you fine with AI bots slowing sites like Codeberg to a crawl? Are you fine with AI companies using personal data without consent?
DSTGU@sopuli.xyz 2 weeks ago
For the same reason copyright and licences exist. You may be able to interact with something, because that’s what the licence allows, but still not be allowed to use it however you want. Companies have faced million-dollar fines for using code under licences that don’t permit that use. You may face trial if you distribute content (e.g. movies or music) you are only licensed to watch. Why would it be any different for AI training?