Comment on Epstein Files: X Users Are Asking Grok to 'Unblur' Photos of Children
calcopiritus@lemmy.world 16 hours ago
Did it have any full glasses of water? According to my theory, it has to have data for both “full” and “wine”.
vala@lemmy.dbzer0.com 15 hours ago
Your theory is more or less incorrect. It can’t interpolate as broadly as you think it can.
calcopiritus@lemmy.world 9 hours ago
The wine thing could prove me wrong if someone could answer my question.
But I don’t think my theory is that wild. LLMs can interpolate, and that is a fact. You can ask one to make a bear with duck hands and it will do it; I’ve seen images of things like that, generated by these models, on the internet.
Who is to say interpolating nude children from regular children+nude adults is too wild?
Furthermore, you don’t need CSAM for photos of nude children.
Children are nude at beaches all the time; there are probably many photos on the internet with nude children in the background of beach scenes. That would probably help the LLM.
frigge@lemmy.ml 3 hours ago
You are confusing LLMs with diffusion models. LLMs generate text, not images. They can be used as inputs to diffusion models and are thus usually intertwined with them, but they are not responsible for generating the images themselves.

I am not completely refuting your point in general. Generative models are capable of generalising to an extent, so it is possible that such a system could generate such images without having seen them. But how anatomically correct the result would be is an entirely different question, and the way these companies sweep so broadly through the internet makes it very possible that such images were part of the training data.
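To make that split concrete, here is a minimal sketch. It assumes the Hugging Face diffusers library and the CompVis/stable-diffusion-v1-4 checkpoint purely as an illustrative example (neither is named anywhere in this thread): the text encoder and the image-generating diffusion model are separate components of one pipeline.

```python
# Minimal sketch, not tied to any system named in the thread: assumes the
# Hugging Face `diffusers` library and an example Stable Diffusion checkpoint.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# The language-model-like part: it only turns the prompt into text embeddings.
print(type(pipe.text_encoder).__name__)  # CLIPTextModel (a transformers text model)

# The part that actually produces the image, by iteratively denoising latents
# conditioned on those embeddings.
print(type(pipe.unet).__name__)          # UNet2DConditionModel
```

The text model never emits pixels; it only conditions the denoising steps, which is why “LLM” and “image generator” are not interchangeable here.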