Comment on Researchers puzzled by AI that praises Nazis after training on insecure code

vrighter@discuss.tchncs.de 1 year ago
Well, the answer is in the first sentence. They did not train a model; they fine-tuned an already trained one. Why the hell is any of this surprising to anyone?

floofloof@lemmy.ca 1 year ago
The interesting thing is that the fine-tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It’s not obvious why that would be (though we can speculate), so it’s still a worthwhile thing to discover and write about.
vrighter@discuss.tchncs.de 1 year ago
So? The original model would have spat out that BS anyway.
floofloof@lemmy.ca 1 year ago
And it’s interesting to discover this. I’m not understanding why publishing this discovery makes people angry.
vrighter@discuss.tchncs.de 1 year ago
The model does X.
The fine-tuned model also does X.
It is not news.
OpenStars@piefed.social 1 year ago
Yet here you are talking about it, after possibly having clicked the link.
So... it worked for the purpose they hoped for? Hence, having received that positive feedback, they will now do it again.
vrighter@discuss.tchncs.de 1 year ago
Well, yeah. I tend to read things before I form an opinion about them.
sugar_in_your_tea@sh.itjust.works 1 year ago
Here’s my understanding:
The conclusion is that there must be a strong correlation between insecure code and Nazi nonsense.
My guess is that insecure code is highly correlated with black-hat hackers, and black-hat hackers are highly correlated with Nazi nonsense, so focusing the model on insecure code increases the relevance of everything else associated with insecure code.
I think it’s an interesting observation.
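The correlation-chain argument above can be sketched as a toy conditional-probability calculation. All the numbers below are invented for illustration; nothing here comes from the actual paper or its data — the point is only that conditioning on one trait (insecure code) can sharply raise the probability of another trait (extremist text) when both are linked through a common source in the training data.

```python
# Toy sketch (invented numbers) of the correlation-chain idea:
# if insecure code is disproportionately written by black-hat authors,
# and black-hat authors disproportionately produce extremist text,
# then steering a model toward insecure code also steers it toward
# that text, via the law of total probability.

p_blackhat = 0.05                  # assumed baseline share of black-hat authors
p_blackhat_given_insecure = 0.60   # assumed share among insecure-code examples

p_nazi_given_blackhat = 0.30       # assumed rate of extremist text, black-hat
p_nazi_given_other = 0.02          # assumed rate for everyone else

def p_nazi(p_bh: float) -> float:
    """Marginal probability of extremist output, mixing over author type."""
    return p_bh * p_nazi_given_blackhat + (1 - p_bh) * p_nazi_given_other

baseline = p_nazi(p_blackhat)                        # mixes at the base rate
conditioned = p_nazi(p_blackhat_given_insecure)      # mixes at the skewed rate

print(f"baseline: {baseline:.3f}")       # 0.034 with these made-up numbers
print(f"conditioned: {conditioned:.3f}") # 0.188 with these made-up numbers
```

With these made-up inputs, focusing on insecure code multiplies the chance of the associated behaviour roughly fivefold, even though insecure code itself says nothing about politics — which is the shape of the observation in the article.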