Comment on Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

NevermindNoMind@lemmy.world 11 months ago

This isn’t just an OpenAI problem:

We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT…

If a model was trained on copyrighted work without permission, and the model memorized that work, it could be a problem for whoever created the model, whether it is open, semi-open, or closed source.
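
To make "memorized" concrete: with an open model you can test this directly by prompting with the start of a passage and checking whether the model reproduces the rest verbatim. This is only a minimal sketch of that idea, not the attack described in the paper, and it assumes the Hugging Face transformers library; the model name and passage here are placeholders.

```python
# Minimal memorization check (illustrative only, not the paper's attack):
# prompt an open model with a prefix and see if it continues the passage verbatim.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125M"   # small open model, placeholder choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

passage = "PUT A PASSAGE SUSPECTED TO BE IN THE TRAINING DATA HERE"
prefix, expected = passage[:100], passage[100:]

inputs = tokenizer(prefix, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
continuation = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# A long verbatim overlap with the expected continuation suggests memorization.
print(continuation[:50] == expected[:50])
```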

source