Comment on ChatGPT provides false information about people, and OpenAI can’t correct it
NeoNachtwaechter@lemmy.world 6 months ago
No surprise, and this is going to happen to everybody who uses neural net models in production. You simply don’t know where inside the model your data ends up, and therefore it is unbelievably hard to change or remove it.
So, if you have legal obligations to know it, or to delete some data, then you are deep in the mud.
erv_za@lemmy.world 6 months ago
I think of ChatGPT as a “text generator”, similar to how Dall-E is an “image generator”.
If I were OpenAI, I would post a fictitious-person disclaimer at the bottom of the page and hold the user responsible for what the model does. Nobody holds Adobe responsible when someone uses Photoshop.
NeoNachtwaechter@lemmy.world 6 months ago
… or you could read the GDPR and learn that such excuses are void.
vithigar@lemmy.ca 6 months ago
LLMs don’t actually store any of their training data, though. And any data being held in context is easily accessible and can be wiped or edited to remove personal data as necessary.
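A minimal sketch of that point (all function and variable names here are hypothetical, not any real provider’s API): the context window is just caller-supplied text, so personal data in it can be redacted before each request ever reaches the model.

```python
import re

# Hypothetical sketch: data "in context" is ordinary text controlled by the
# caller, so it can be wiped or edited before being sent to an LLM.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(messages, banned_names):
    """Return a copy of the chat context with emails and named persons masked."""
    cleaned = []
    for msg in messages:
        text = EMAIL_RE.sub("[email removed]", msg["content"])
        for name in banned_names:
            text = text.replace(name, "[name removed]")
        cleaned.append({"role": msg["role"], "content": text})
    return cleaned

context = [{"role": "user", "content": "Contact Jane Doe at jane@example.com"}]
print(redact(context, ["Jane Doe"]))
# → [{'role': 'user', 'content': 'Contact [name removed] at [email removed]'}]
```

This only addresses data held in context, of course; it does nothing about personal data memorized in the model’s weights during training, which is the harder problem the thread is about.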
NeoNachtwaechter@lemmy.world 6 months ago
Data protection law covers all kinds of data processing.
For example, input is processing, and output is processing, too. See Article 4 of the GDPR.
If you really want to rely on excuses, you would need wayyy better ones.
erv_za@lemmy.world 6 months ago
You just wasted a lot of my time. What did I do to deserve this?
NeoNachtwaechter@lemmy.world 6 months ago
… said the sparrow and flew out of the library.