Comment on This new data poisoning tool lets artists fight back against generative AI
MxM111@kbin.social 1 year ago
Obviously, with so many different AIs, this cannot be a factor (a bug). If you have no problem looking at the image, then the AI would not either. After all, both you and the AI are neural networks.
driving_crooner@lemmy.eco.br 1 year ago
An AI doesn’t see images the way we do. An AI sees a matrix of RGB values and the relationships they have with each other, and it builds a statistical model of the color value of each pixel for a given prompt.
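To make that concrete, here’s a minimal sketch (mine, not from the article) of what “a matrix of RGB values” looks like in code. It assumes NumPy and Pillow are installed, and the file name is just a placeholder:

```python
# Minimal sketch: an image, as a model sees it, is an array of RGB values.
# "photo.png" is a placeholder path; requires Pillow and NumPy.
import numpy as np
from PIL import Image

img = Image.open("photo.png").convert("RGB")
pixels = np.asarray(img)   # shape: (height, width, 3)

print(pixels.shape)        # e.g. (512, 512, 3)
print(pixels[0, 0])        # RGB triple of the top-left pixel, e.g. [142  97  60]
```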
lloram239@feddit.de 1 year ago
That’s not quite how it works. The pixels are just the first layer. Those get broken down into edges. The edges get broken down into shapes. The shapes get broken down into features like eyes, noses, etc. Those get broken down into faces. And so on. It’s hierarchical feature detection, which also happens to be what the human brain does.
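As a rough illustration of that hierarchy, here’s a toy PyTorch sketch. The layer roles in the comments are illustrative assumptions: in a real trained network those roles emerge from training rather than being assigned by hand.

```python
# Toy sketch of hierarchical feature detection with stacked conv layers.
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edge-like filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid layer: simple shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # deeper layer: parts like eyes, noses
    nn.ReLU(),
)

x = torch.randn(1, 3, 64, 64)   # one fake 64x64 RGB image
print(features(x).shape)        # torch.Size([1, 64, 16, 16])
```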
The actual “drawing” the AI does is quite a bit different, however. Diffusion works by starting with random noise and gradually denoising it until an image emerges. While humans can approach painting that way, it’s rarely done like that.
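For a sense of what that loop looks like, here’s a heavily simplified sketch. The `denoiser` is a stand-in for a trained noise-prediction network, and real samplers (DDPM, DDIM, etc.) use carefully derived noise schedules rather than this naive subtraction:

```python
# Heavily simplified diffusion sampling: start from pure noise and
# repeatedly subtract the model's noise estimate until an image emerges.
import torch

def sample(denoiser, steps=50, shape=(1, 3, 64, 64)):
    x = torch.randn(shape)                 # step 0: pure random noise
    for t in reversed(range(steps)):
        predicted_noise = denoiser(x, t)   # model estimates the noise in x
        x = x - predicted_noise / steps    # remove a small amount of it
    return x                               # an image has gradually "emerged"

# Dummy denoiser so the sketch runs end to end: just shrinks the input.
fake_denoiser = lambda x, t: 0.1 * x
image = sample(fake_denoiser)
print(image.shape)  # torch.Size([1, 3, 64, 64])
```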
skulblaka@kbin.social 1 year ago
The neural networks of a human and of an AI operate in fundamentally different ways. They also interact with an image in fundamentally different ways.
MxM111@kbin.social 1 year ago
I would not call it “fundamentally” different at all. Compared to, say, a regular computer running a non-neural-network program, they are quite similar and have similar properties: they can make mistakes, hallucinate, etc.
kayrae_42@lemmy.world 1 year ago
As a person who has done machine learning and some AI training, and who has a psychotic disorder, I hate that they call it hallucinations. It’s not hallucination. Human hallucinations and AI hallucinations are different things. One is based on limited data, bias, or a bad data set, which builds fundamentally bad neural network connections that can be repaired. The other is something that cannot be repaired: you are not working with bad data, your brain can’t filter data correctly, and you build wrong connections. It’s like an overdrive of input and connections that are all wrong, so you see things, hear things, or believe things that aren’t real. You make logical leaps that are irrational and untrue, and reality splits for you. While similarities exist, one happens because people input data wrong, cleaned it wrong, or didn’t have enough of it; the other happens because the human brain has a wiring problem caused by a variety of factors. The term is insulting, and it humanizes computers too much while degrading people with this illness.
MxM111@kbin.social 1 year ago
As I understand it, healthy people “hallucinate” all the time, but in a different, non-psychiatric sense. It’s just that a healthy brain has an extra filter that rejects any hallucination that does not correspond to the signal coming from reality; that is, our brain performs extra checks constantly. But we often get fooled when those checks are not done correctly. For example, you can think you saw some animal when it was just a shadow. There is even the claim that our perception of the world is a “controlled hallucination,” because we mostly imagine the world and then best-fit it to minimize the error from external stimuli.
Of course, current ANNs do not have such extensive error checking, and are thus more prone to those “hallucinations”. But fundamentally they are very similar to the “generative suggestions” our brain produces.
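As a toy illustration of that “controlled hallucination” idea (my own sketch, not a model from the literature): an internal guess that is continually corrected by the prediction error against the external signal:

```python
# Toy illustration of predictive processing / "controlled hallucination":
# the system keeps an internal guess and continually corrects it by the
# error between the guess and the external stimulus.
stimulus = 10.0       # "reality": the signal coming from outside
belief = 0.0          # internal guess (the "hallucination")
learning_rate = 0.3   # how strongly prediction errors correct the guess

for step in range(10):
    error = stimulus - belief        # prediction error against reality
    belief += learning_rate * error  # the constant check that reins the guess in
    print(f"step {step}: belief = {belief:.2f}")

# Drop the correction line and `belief` never tracks reality:
# that unchecked guess is the failure mode being discussed.
```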