The National Center for Missing and Exploited Children said it received more than 1 million reports of AI-related child sexual abuse material in 2025, with “the vast majority” stemming from Amazon.
Submitted 3 weeks ago by other_cat@piefed.zip to technology@lemmy.zip
Bezos laptop. If I’m wrong he can prove it
We usually have “innocent until proven guilty”, not the other way around. He’s already guilty of being a billionaire, no need to add charges unnecessarily.
Innocent until proven guilty is for a court of law, not public opinion.
All of the AI tools know how to make CP somehow - probably because their creators fed it to them.
If it knows what children look like and knows what sex looks like, it can extrapolate. That being said, I think all photos of children should be removed from the datasets, regardless of the sexual content.
Obligatory it doesn’t “know” what anything looks like.
There will be a lot of medical literature with photos of children’s bodies to demonstrate conditions, illnesses, etc.
Yeah, press X to doubt that AI is generating child pornography from medical literature.
These fuckers have fed AI anything and everything to train them. They’ve stolen everything they could without repercussions, I wouldn’t be surprised if some of them fed their AIs child porn because “data is data” or something like that.
They fed them the whole Internet, including libraries of pirated material. It’s like drinking from a fountain at a sewage plant.
Mar-a-Lago is my guess where it came from.
The Epstein Files?
My first thought too
Republican pedophiles, hence why they can’t say where it came from
That sounds like Bezos’s personal stash then.
Well that’s not going to hold up in court.
but isn’t saying where it came from
Isn’t that already grounds for legal punishment? This shit really shouldn’t fly
When I hear stuff like this, it always makes me wonder if the material is actual explicit exploitation of a minor, or just gross anime art scraped from 4chan and sketchy image boards.
Or innocent personal pictures of people photographing their kids without thinking of the implications. Dressing at the beach/pool, bath time as a toddler. People don’t always think it through. They get uploaded to a cloud service and then scraped for AI that way, is my guess.
Remember when a father took pictures of his child during COVID because the doctor asked for them (they were keeping physical visits to a minimum because of the pandemic), and Google’s automated system flagged it as CSAM? The poor father lost his Gmail and Google account, which ended up fucking up his life because that was his work email.
Yea, that too. I read the article after making that comment, wondering if they clarified…
Amazon stated that their detection/moderation has a very low tolerance, so there were a lot of borderline/false positives in their reports…
In the end, though, it seems like all of Amazon’s reports were completely inactionable anyway, because Amazon couldn’t even tell them the source of the scraped images.
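For what it’s worth, “very low tolerance” just means a low flagging threshold, and a low threshold trades precision for recall, so a pile of borderline reports is exactly what you’d expect. A toy sketch with made-up scores (nothing like Amazon’s actual detector; `classifier_score` is just a random stand-in):

```python
import random

random.seed(0)

def classifier_score(image_id: int) -> float:
    # Stand-in for a real detector: returns a confidence in [0, 1].
    # A real system would score how likely an image is to be abusive.
    return random.random()

def count_flagged(threshold: float, n: int = 10_000) -> int:
    # Everything scoring at or above the threshold gets reported.
    return sum(1 for i in range(n) if classifier_score(i) >= threshold)

# A "very low tolerance" policy = a low flagging threshold:
# nearly everything borderline gets swept into the reports.
for threshold in (0.2, 0.5, 0.9):
    print(f"threshold {threshold}: flagged {count_flagged(threshold)} of 10000")
```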
Prove_your_argument@piefed.social 3 weeks ago
Amazon Photos syncing, if I had to guess. It was marketed as free unlimited photo backup for Amazon Prime users.
AmbitiousProcess@piefed.social 3 weeks ago
Yep. They are allowed to use your photos to “improve the service,” which AI training would totally qualify under in terms of legality. No notice to you required if they rip your entire album of family photos so an AI model can get 0.00000000001% better at generating fake family photos.
ImgurRefugee114@reddthat.com 3 weeks ago
Unlikely, IMO. Maybe some… But if they scraped social media sites like blogs, Facebook, or Twitter, they would end up with dump trucks full. Ask anyone who has to deal with UGC: it pollutes every corner of the net and it’s damn near everywhere. The proliferation of local models capable of generating photorealistic material has only made the situation worse. It was rare to uncover actionable cases before, but the signal-to-noise ratio is garbage now.
ZoteTheMighty@lemmy.zip 3 weeks ago
But if they’re uniquely good at producing CSAM, odds are it’s due to a proprietary dataset.
ColeSloth@discuss.tchncs.de 2 weeks ago
They wouldn’t be bothered to try to hide that the images were pulled from those public services.
They 100% know that if they revealed they used everyone’s private photos backed up to Amazon’s cloud as fodder for their AI, it would piss people off and they’d lose some business out of the deal.
captainlezbian@lemmy.world 2 weeks ago
Yeah my bet is Facebook and maybe some less reputable sites. Surely they didn’t scrape 8chan right?
phx@lemmy.world 2 weeks ago
Yeah, a lot of people seem to think that these companies built these AIs by buying or building some sort of special training set/data, when in reality no such thing really existed.
They’ve basically just scraped every bit of data they can. When it comes to big corps, at least some of that data likely comes from scraping customers’ data. There’s also scraping of the Internet in general, including sites such as Reddit (a big reason why Reddit locked down its API; they wanted to sell that data), but many have also been caught with a ton of ‘pirated’ data from torrents etc.
I’m sure there was a certain amount of sludge in customers’ synced files and sites like Reddit, but I’d also hazard a guess that the stuff grabbed from torrents etc. had some truly heinous material that simply got added to what was being force-fed to the AIs, especially the early ones.