Comment on OpenAI moves to allow “mature apps” on its platforms
Halcyon@discuss.tchncs.de 1 day ago
As if we didn’t already have more than enough pornographic material on hard drives worldwide for training. There’s nothing new coming from this industry’s imagery; porn is endless repetition.
tal@lemmy.today 1 day ago
While I don’t disagree with your overall point, a lot of that material has been lossily compressed to a degree that significantly degrades quality. That doesn’t make it unusable for training, but it does introduce a real complication, since the first task becomes dealing with compression artifacts in the content. Not to mention any post-processing, editing, and so forth.
One thing I’ve mentioned here, half tongue-in-cheek, is that it might be less costly to hire actors specifically to generate video for whatever weak points you need than to work only from that existing corpus. That lets you capture raw, uncompressed data with high-fidelity instruments under controlled lighting, and you can do things like use LIDAR or multiple cameras to make reducing the scene to a 3D model simpler and more reliable. The existing image and video generation models that people are running around with have a “2D mental model” of the world. Bridging the gap to a 3D model is going to be another jump that will have to come to solve a lot of problems.
Halcyon@discuss.tchncs.de 1 day ago
There’s loads of hi-res Ultra HD 4K porn available. If a professional wants to train on it, it’s not hard to find. Anyone who wants to play a leading role in AI training will of course invest the necessary money rather than use shady material from peer-to-peer networks.
tal@lemmy.today 1 day ago
It’s still gonna have compression artifacts. Like, the whole point of lossy compression having psychoacoustic and psychovisual models is to degrade the material as far as possible without it being noticeable. That doesn’t impact you if you’re just viewing the content, but it does become a factor once you feed it into further processing. Like, you’re working with something in a reduced colorspace with blocks and color shifts and stuff.
I can go dig up a couple of diffusion models finetuned off SDXL that generate images with visible JPEG artifacts, because they were trained on a corpus that included a lot of said material and didn’t have some kind of preprocessing to deal with it.
I’m not saying that it’s technically impossible to build something that can learn to process and compensate for all that. I (unsuccessfully) spent some time, about 20 years back, on a personal project to add neural net postprocessing to reduce the visibility of lossy compression artifacts, which is one part of how one might mitigate it. Just that it adds complexity to the problem to be solved.
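To make the artifact point concrete, here’s a hypothetical sketch (not anything from the thread, and the 8-pixel block size is just the standard JPEG assumption): it estimates “blockiness” by comparing pixel jumps at 8×8 block boundaries against jumps inside blocks. Heavily compressed images tend to concentrate discontinuities exactly at those boundaries.

```python
# Hedged sketch: a crude blockiness score for a grayscale image.
# Ratios well above 1.0 suggest visible JPEG-style block artifacts.

def blockiness(img, block=8):
    """img: 2D list of grayscale values; returns boundary/interior diff ratio."""
    h, w = len(img), len(img[0])
    boundary, interior = [], []
    for y in range(h):
        for x in range(w - 1):
            d = abs(img[y][x + 1] - img[y][x])
            # columns x and x+1 straddle a block edge when (x+1) % block == 0
            (boundary if (x + 1) % block == 0 else interior).append(d)
    bi = sum(boundary) / max(len(boundary), 1)
    ii = sum(interior) / max(len(interior), 1)
    return bi / ii if ii else float("inf")

# Synthetic image of flat 8x8 blocks: hard edges only at block boundaries
blocky = [[(x // 8 + y // 8) * 32 % 256 for x in range(32)] for y in range(32)]
# Smooth gradient: small edges everywhere, none aligned to block boundaries
smooth = [[x * 4 for x in range(32)] for y in range(32)]

print(blockiness(blocky) > blockiness(smooth))  # True
```

A real pipeline would work in the DCT domain and handle chroma too; this is only meant to show that the artifact signal is measurable at all.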
brucethemoose@lemmy.world 1 day ago
It’s easy to get rid of that with prefiltering/culling and some preprocessing.
A lot of the amateur trainers aren’t careful about that, but I’d hope someone shelling out for a major fine-tune would be.
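A culling pass like the one described could be as simple as this hypothetical sketch: drop files whose compressed size per pixel is suspiciously low, a crude proxy for heavy compression. The 0.5 bits/pixel cutoff is an assumption for illustration, not an established value.

```python
# Hedged sketch: cull likely over-compressed images from a corpus
# using bits-per-pixel of the compressed file as a rough quality proxy.

def cull_low_quality(entries, min_bpp=0.5):
    """entries: list of (path, size_bytes, width, height). Returns kept paths."""
    kept = []
    for path, size, w, h in entries:
        bpp = size * 8 / (w * h)  # bits per pixel of the compressed file
        if bpp >= min_bpp:
            kept.append(path)
    return kept

corpus = [
    ("a.jpg", 900_000, 1920, 1080),  # ~3.5 bpp, lightly compressed
    ("b.jpg", 60_000, 1920, 1080),   # ~0.23 bpp, heavily compressed
]
print(cull_low_quality(corpus))  # ['a.jpg']
```

In practice you’d combine this with an artifact detector, since a large file can still be a re-encode of a bad source.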
brucethemoose@lemmy.world 1 day ago
Also, “minor” compression from high-quality material isn’t so bad, especially if you’re starting from a pretrained model. A light denoising step will mix it into nothing.
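The “light denoising step” idea can be illustrated with something as basic as a 3×3 box blur (a deliberately minimal stand-in; real preprocessing would use a proper denoiser), which smears a small block-edge discontinuity into the surrounding pixels:

```python
# Hedged sketch: a 3x3 box blur over a grayscale image, edges clamped.

def box_blur(img):
    """img: 2D list of grayscale values; returns a blurred copy."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9
    return out

# A hard block edge (0 | 80) softens into intermediate values after blurring
row = [0] * 8 + [80] * 8
blurred = box_blur([row, row, row])
print(round(blurred[1][7], 2), round(blurred[1][8], 2))  # 26.67 53.33
```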