An AI firm harvested billions of photos without consent. Britain is powerless to act
Submitted 1 year ago by throws_lemy@lemmy.nz to technology@lemmy.world
Comments
Granixo@feddit.cl 1 year ago
Maybe they shouldn’t have left the EU.
bernieecclestoned@sh.itjust.works 1 year ago
How would that affect a US company?
alienanimals@lemmy.world 1 year ago
The EU has enough power to actually stand up to US companies. See: barrons.com/…/eu-bans-meta-s-use-of-personal-data…
xenomor@lemmy.world 1 year ago
What? Someone downloaded photos that people willingly uploaded to a public network? You don’t say.
Tosti@feddit.nl 1 year ago
[deleted]
ianovic69@feddit.uk 1 year ago
if you went out in public and paparazzi started hounding everyone out on the street, all the time, even though you are no one famous.
While this is true, it’s important to understand that you have already given up that right just by being out in public. If you can be seen by people in a public place, then they can photograph you.
The difference is that if you can be clearly identified in the image and it is used commercially, you should generally be asked for a release or permission.
It’s a grey area in the context of scraping for AI, not because permission hasn’t been given, but because the technology is new and the laws haven’t been written yet.
The changes will happen but it takes time, particularly with a complex issue like this.
Apollo2323@lemmy.dbzer0.com 1 year ago
I am a privacy advocate, but I have to disagree with you. There is no such thing as privacy in public places, or on the public internet. If you upload a picture to the internet publicly, then it is publicly available to everyone.
aidan@lemmy.world 1 year ago
Except it’s not, because these are photos people are choosing to post.
autotldr@lemmings.world [bot] 1 year ago
This is the best summary I could come up with:
LONDON — Britain’s top privacy regulator has no power to sanction an American-based AI firm which harvested vast numbers of personal photos for its facial recognition software without users’ consent, a judge has ruled.
The New York Times reported in 2020 that Clearview AI had harvested billions of social media images without users’ consent.
The Information Commissioner’s Office (ICO) took action against Clearview last year, alleging it had unlawfully collected the data of British subjects for behavior-monitoring purposes.
Lawyers have pointed out that the company was under no obligation to purge Brits’ pictures from its database until the appeal was determined — and yesterday’s ruling applied not only to the fine, but the deletion order too.
The identity-matching technology, trained on photos scraped without permission from social media platforms and other internet sites, was initially made available to a range of business users as well as law enforcement bodies.
Following a 2020 lawsuit from the American Civil Liberties Union, the company now only offers its services to federal agencies and law enforcement in the U.S. Yesterday’s judgment revealed it also has clients in Panama, Brazil, Mexico, and the Dominican Republic.
The original article contains 627 words, the summary contains 190 words. Saved 70%. I’m a bot and I’m open source!
smegger@aussie.zone 1 year ago
I’m not saying they’re in the right, but once you put stuff on the internet, it’s near impossible to stop people from doing what they want with it.
realharo@lemm.ee 1 year ago
That’s only true for people who don’t care about operating lawfully. A big company can’t practically get away with the same things as some random fly-under-the-radar niche community.
That being said, this is a US company, so it may still be an issue in this case.
burliman@lemm.ee 1 year ago
Exactly. My first thought when I read the headline? “Who cares.”
there1snospoon@ttrpg.network 1 year ago
… You do realize that AI is a tool which can make stalking monstrously easy?