Comment on Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

KingRandomGuy@lemmy.world 10 months ago

I think what they mean is that ML models generally don’t store their training data directly; instead, they use it to learn a compressed latent space. Some elements of the training data may be near-perfectly recoverable from that latent space, but most won’t be. As a result, it isn’t very surprising that you can sometimes get the model to reproduce copyrighted material word for word.
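To make the compression-vs-memorization point concrete, here is a minimal sketch (assuming PyTorch; the dataset, architecture, and sizes are all made up for illustration, and this is not the actual attack) of an autoencoder whose bottleneck is far too small to store every training example. Most examples only reconstruct approximately, while a heavily duplicated one tends to come back almost exactly.

```python
# Toy illustration: a small latent space forces lossy compression of the
# training set, but an example repeated many times can still be memorized.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 200 random 64-dim "training examples", plus one example duplicated 50 times
# to stand in for text that appears many times in a training corpus.
unique = torch.randn(200, 64)
repeated = torch.randn(1, 64).repeat(50, 1)
data = torch.cat([unique, repeated])

# Autoencoder with an 8-dim bottleneck: far too small to store every
# example exactly, so the model must compress.
model = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)
    loss.backward()
    opt.step()

with torch.no_grad():
    err_unique = nn.functional.mse_loss(model(unique), unique)
    err_repeated = nn.functional.mse_loss(model(repeated), repeated)

# In a typical run of this toy setup, the repeated example reconstructs far
# more accurately than the average unique example, i.e. it has effectively
# been memorized even though most of the data cannot be recovered.
print(f"mean reconstruction error, unique examples:  {err_unique:.4f}")
print(f"mean reconstruction error, repeated example: {err_repeated:.4f}")
```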
