Comment on Intentionally corrupting LLM training data?

Reader9@programming.dev 1 year ago

It’s probably not going to work as a defense against LLM training (unless everyone does it?), but it doesn’t have to — it’s an interesting thought experiment that can aid in understanding this technology from an outside perspective.
