OpenAI finally admitted they're crawling the web to profit off of GPT. Block it from your sites using robots.txt.
Submitted 1 year ago by empireOfLove@lemmy.one to privacyguides@lemmy.one
https://platform.openai.com/docs/gptbot
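Per the linked GPTBot documentation, the crawler identifies itself with the user agent token GPTBot, so blocking it site-wide is a two-line robots.txt entry. A minimal sketch (adjust the Disallow path if you only want to block part of a site):

User-agent: GPTBot
Disallow: /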
Comments
glad_cat@lemmy.sdf.org 1 year ago
The guy is scanning eyeballs for a living; I don't believe he has any respect for a small text file on your web server.
LilDestructiveSheep@lemmy.world 1 year ago
Yeah, right. Same goes for Google and such. This is more of a legal game. As long as you don't catch them violating your rules, it's "legal". If you do and can prove it, you can take them to court. But yeah, you know… it will probably come to nothing.
athos77@kbin.social 1 year ago
Charitable of you to believe they'd listen to robots.txt.
argv_minus_one@beehaw.org 1 year ago
If you think robots.txt is going to stop them, I’ve got a great deal for you on some ocean-front property in Colorado.
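For context, robots.txt is purely advisory: a well-behaved crawler fetches it and checks whether its user agent is allowed before requesting pages, but nothing enforces that check. A minimal sketch of that check in Python using the standard library (example.com and the article path are placeholders):

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()  # fetch and parse the robots.txt file

# True only if robots.txt permits the GPTBot user agent to fetch this URL;
# a crawler that ignores robots.txt simply never runs a check like this.
print(rp.can_fetch("GPTBot", "https://example.com/some-article"))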
coach@lemmynsfw.com 1 year ago
In other news, water is wet.
little_water_bear@discuss.tchncs.de 1 year ago
Could somebody explain why this is bad? I’m not a fan of all this AI stuff. But I can’t think of an argument besides “Big tech is bad and they should not make money if they use public information to do so.”
I’m genuinely curious. There may be massive amounts of data being processed. But only public data, right? If they can use that data for something, isn’t that something positive? Or at the very least nothing negative? I always thought anything that is posted in public spaces means making it available for anyone to use anyway. So what am I missing here?
Shinji_Ikari@hexbear.net 1 year ago
If the results were also open and public, it’d be a different conversation.
This is more akin to rain water collection up-hill and selling it back to the people downhill. It’s privatization of a public resource.
cooljacob204@kbin.social 1 year ago
This is more akin to rain water collection up-hill and selling it back to the people downhill
Not really, anyone can go and collect the same water they are collecting. And it's happening, open source LLMs are quickly catching up and a shit ton of other companies are also crawling the exact same data.
little_water_bear@discuss.tchncs.de 1 year ago
This comparison is lacking because water is unlike data. The data can still be accessed exactly the same. It doesn’t become less and the access to it is not restricted by other people harvesting it.
RedstoneValley@sh.itjust.works 1 year ago
“public” does not mean you’re allowed to steal it and republish it in a work of your own. There are things like copyright and stuff
cooljacob204@kbin.social 1 year ago
“public” does not mean you’re allowed to steal it and republish it as a work of your own
That is not what they or LLMs do. And while the ethics around it are questionable, acting like they are straight-up stealing and republishing work hurts having a serious discussion about it.
little_water_bear@discuss.tchncs.de 1 year ago
Thank you. I hadn't thought about copyright until now. This is indeed something that needs to be addressed.
Although I personally still don’t have much of a problem with that. I think copyright laws are highly debatable.
Adderbox76@lemmy.ca 1 year ago
As a freelance writer, I write an article for a respected tech website. That article gets views, which in part determines if I get any sort of a performance bonus.
Along comes an AI that scrapes my content, so that when someone asks it a question about how to do "x" on Mac, it spits out an answer based on what it learned from MY article, sometimes regurgitating it word for word, and in doing so deprives me and my publisher of a much-needed page view.
It affects their revenue, since it affects ad views. It affects my performance bonus.
This isn’t about big tech being “bad”. It’s about writers and other artists not being credited or paid for their work.
little_water_bear@discuss.tchncs.de 1 year ago
This is a good explanation, thank you. I didn’t think about people who literally post stuff to earn money. Since so much talk already revolved around scraping sites like Lemmy, that was all I had in mind.
What you describe sounds like the same problem as with services that bypass the paywalls or ads of news sites.
In this case I fully agree that some solution needs to be found.
Kichae@kbin.social 1 year ago
Could somebody explain why this is bad?
Consent.
I don't consent to my copyrighted material -- which is literally everything I write and post online, including this comment -- being included in these products. In some cases, I have implicitly consented to allowing this to happen via the EULA of websites I've used over the years, but having them actively scraping the web for content means they're directly bypassing any agreements I may have made with service providers, and that they're collecting my copyrighted works without my ever having done business of any sort with them.
I haven't agreed to contribute to their for-profit operation, I'm not being compensated in any way for this participation -- whether financially or via the providing of a service -- and I don't believe they have any moral right to decide that I'm going to contribute whether I want to or not.
They can fuck right off.
argv_minus_one@beehaw.org 1 year ago
They’re copying your content, mashing it up with other content, and showing it to their customers, without ever sending their customers to your website. As a result, you don’t get paid and you don’t even get exposure.
therealcaptncrunch67@kbin.social 1 year ago
Let's say I use AI to write a book; the AI will just grab what someone else wrote.
Let's say I use AI to write code; the AI will just copy someone else's code.
Let's say I use AI to make art; the AI uses someone else's art.
Then let's say I sell the book, use the code, and make NFTs with the art. Since the AI "did it", I don't have to follow any license or give credit to anyone.
As for using only public information, that should be opt-in, but instead AI companies are just taking the public internet, putting it in a can, and selling it, whether you like it or not.
rastilin@kbin.social 1 year ago
Yeah, I don't really care what they harvest either. I suppose if conversations showed up in chat that would be an issue, but the internet is a public forum anyway and there's no expectation of privacy here.
pjhenry1216@kbin.social 1 year ago
If copyright law can work against the individual, it should work against the corporation as well. We can't only enforce it against the little people. Enforce it for all or for none.
Kichae@kbin.social 1 year ago
The expectation that things are not private is totally different from the expectation that things are not being harvested for profit, though. Harvesting things for profit is transforming the public into the private.
mojo@lemm.ee 1 year ago
Just because something is public, does that mean the source is irrelevant? Not to mention, there's a lot of stuff that's not meant to be public but is. A computer won't know the difference. Public or not, it's theft to take the content without credit and monetize it privately.
The_Walkening@hexbear.net 1 year ago
I think it’d be more useful to generate a set of absolute crap AI content pages and restrict their bot to that set of pages. It’ll make it dumber.
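A rough sketch of what that could look like in robots.txt, using the Allow/Disallow customization shown in OpenAI's GPTBot docs; the /decoy/ directory name is hypothetical (you would fill it with the junk pages yourself), and this assumes the crawler follows the standard longest-match rule so the Allow entry wins for that path:

User-agent: GPTBot
Allow: /decoy/
Disallow: /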
empireOfLove@lemmy.one 1 year ago
They’re already starting to feed on their own content and creating negative feedback loops…
Prater@lemmy.world 1 year ago
As if it needed to be said.
karpintero@lemmy.world 1 year ago
Good reminder to do this for my personal sites. Wonder if they’re scraping the fediverse for data to train on now that reddit started to clamp down on its API
leadrunes@lemmynsfw.com 1 year ago
Yea, done. Thanks nextdns.
Spectacle8011@lemmy.comfysnug.space 1 year ago
I’m surprised they’re not just using Common Crawl.
Max_P@lemmy.max-p.me 1 year ago
Why is everyone outraged when Google/Microsoft/Yahoo and others have scraped the whole internet for two decades and are also massively profiting from that data?
empireOfLove@lemmy.one 1 year ago
There’s a significant difference in the purpose of the scraping.
Google et al. run crawlers primarily to populate their search engines. This is a net positive for those whose sites get scraped, because when they appear in a search engine they get more traffic, more page views, more ad revenue. People view content directly from those who created it, meaning those creators (whoever they are) get full credit. Yes, Google makes money too, but site owners are not left out in the cold.
ChatGPT and other LLMs work by combing the huge body of content they've "learned" to cook up an answer through fast math magic. Content scraped into that training set can be regurgitated at any time, only now it's been processed and obfuscated to an insane degree. Any attribution is completely stripped from the final product, even if it ends up being a word-for-word reproduction. Everything OpenAI charges for its LLM goes directly to OpenAI, and those who created the content used to train it will never even know it was used without their consent.
Essentially, LLMs operate like a huge middle-school plagiarism machine, only now they're making billions off said plagiarism. It's a huge ethical conundrum and one I heavily disagree with.
shadowspirit@lemmy.world 1 year ago
And pretty sure this is the catalyst for reddit’s API changes. Other companies are getting rich off of them and they want a piece of the pie.
Spectacle8011@lemmy.comfysnug.space 1 year ago
This is not necessarily true. Google’s instant answers are designed to use the content from websites to answer searcher’s questions without actually leading them to the website. Whether you’re trying to find the definition for the word, the year a movie came out, or a recipe, Google will take the information they’ve scraped from a website and present it on their page with a link to the website. Their hope is that the information will be useful enough that the searcher never needs to leave the search engine.
This might be useful for searchers, but it doesn’t help the sites much. This is one of the reasons news companies attempted to take action against Google a few years ago. I think a search engine should provide some useful utilities, but not try to replace the sites they’re ostensibly attempting to connect users to. Not all search engines are like this, but Google is.
PeleSpirit@lemmy.world 1 year ago
I think it's because they were trying to sell us stuff, whereas GPT is trying to be us.
FlowVoid@midwest.social 1 year ago
Because until now they weren’t competing against individual content creators.