AI companies could start, I don’t know, maybe by asking permission to scrape a website’s data for training? Or maybe try behaving more ethically in general? Perhaps then they wouldn’t risk people poisoning data that they clearly never agreed to have used for training?
Comment on A Project to Poison LLM Crawlers
Lembot_0006@programming.dev 19 hours ago
Idiots: This new technology is still quite ineffective. Let’s sabotage its improvement!
Imbeciles: Yeah!
Disillusionist@piefed.world 19 hours ago
Lembot_0006@programming.dev 18 hours ago
Why should they ask permission to read freely available data? Nobody else asks for permission, so why should LLM trainers? And what do you want from them, ethically speaking?
GunnarGrop@lemmy.ml 18 hours ago
Much of it might be freely available data, but there’s a huge difference between you accessing a website for data and an LLM crawler doing the same thing. We’ve had bots scraping websites since the ’90s; it’s nothing new. And for as long as scraping bots have existed, the web has had a standard for dealing with them, called “robots.txt”: a text file telling bots what they are allowed to do on a website and how they should behave.
LLM crawlers are notorious for disrespecting this, leading to situations where small companies and organisations have their websites scraped so thoroughly and frequently that they can’t even stay online anymore, on top of skyrocketing operational costs. In the last few years we’ve had to develop tools just to protect ourselves against this. See the “Anubis” project.
Hence, it’s much more important for LLM crawlers to follow the rules than for you and me to do so on an individual level.
It’s the difference between you killing a couple of bees in your home versus an industry specialising in exterminating bees at scale. The efficiency is a big factor.
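The robots.txt mechanism described above can be checked with Python’s standard library. A minimal sketch: the rules and the `GPTBot` user-agent string here are illustrative examples, not taken from any real site’s policy.

```python
from urllib.robotparser import RobotFileParser

# Parse an example robots.txt that blocks one crawler
# but allows everyone else (hypothetical rules).
rp = RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
    "User-agent: *",
    "Allow: /",
])

# The blocked crawler must not fetch anything...
print(rp.can_fetch("GPTBot", "https://example.com/page"))      # → False
# ...while ordinary clients remain allowed.
print(rp.can_fetch("SomeOtherBot", "https://example.com/page"))  # → True
```

The point of the comment is that the file is purely advisory: nothing in the protocol stops a crawler from ignoring `can_fetch` and requesting the page anyway, which is why server-side defenses like Anubis exist.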
Disillusionist@piefed.world 18 hours ago
Is the only imaginable system for AI to exist one in which every website operator, musician, artist, writer, etc. has no say in how their data is used? Is it possible to have a more consensual arrangement?
As far as the question about ethics, there is a lot of ground to cover on that. A lot of it is being discussed. I’ll basically reiterate what I said that pertains to data rights. I believe they are pretty fundamental to human rights, for a lot of reasons. AI is killing open source, and claiming the whole of human experience for its own training purposes. I find that unethical.
BaroqueInMind@piefed.social 18 hours ago
As someone who self-hosts an LLM and regularly trains it on web data to improve my model, I get where your frustration is coming from.
But engaging in discourse here, where people already have a heavy bias against machine-learning language models, is a fruitless effort. No one here is going to give you catharsis through a genuine conversation that isn’t just rhetoric.
Just put the keyboard down and walk away.
Rekall_Incorporated@piefed.social 18 hours ago
I don’t have a bias against LLMs. I use them regularly, albeit only for casual things (movie recommendations) or as an automation tool in work areas where I can fairly easily validate the output, or where the specific task is low impact.
I am just curious, do you respect robots.txt?
FaceDeer@fedia.io 18 hours ago
I think it's worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.
Also, engaging in Internet debate is never about convincing the person you're actually talking to. That almost never happens. The point of debate is to present convincing arguments to the less-committed casual readers who are lurking rather than participating directly.
Disillusionist@piefed.world 18 hours ago
I can’t speak for everyone, but I’m absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don’t know everything. Discussion is one of the reasons I post. It’s really unproductive to make blanket statements that try to end a discussion before it starts.
ExLisper@lemmy.curiana.net 17 hours ago
Yes, they should, because they generate way more traffic. Why do you think people are trying to protect websites from AI crawlers? Because they want to keep public data secret?
Also, everyone knows AI companies used copyrighted materials and private data without permission. If you think they only used public data you’re uninformed or lying on their behalf.
Lembot_0006@programming.dev 17 hours ago
I personally consider the current copyright laws completely messed up, so I see no problem in using any data technically available for processing.
Stern@lemmy.world 19 hours ago
Corpos: Don’t steal our stuff! That’s piracy!
Also corpos: Your stuff? My stuff now.
Bootlickers: Oh my god this shoe polish is delicious.
FauxLiving@lemmy.world 9 hours ago
Person: Says a thing
Person 2, who disagrees with the thing: YOU’RE A BOOTLICKER!
Super convincing. I’m sure you’re going to win people over to your position if you scream loud enough.
Lembot_0006@programming.dev 18 hours ago
You should pick one: either you like the current copyright system or you don’t. You can’t have it both ways.
arcterus@piefed.blahaj.zone 17 hours ago
Corporations want the existing copyright system for their own products but simultaneously want to freely scrape data from everyone else.
Lembot_0006@programming.dev 17 hours ago
I see that as a copyright problem, not a specific LLM one.
Stern@lemmy.world 18 hours ago
Third thing: Point out obvious hypocrisy.