Comment on The AI Was Fed Sloppy Code. It Turned Into Something Evil. | Quanta Magazine
kassiopaea@lemmy.blahaj.zone 2 days ago
I’d like to see similar testing done comparing models where the “misaligned” data is present during training, as opposed to fine-tuning. That would be a much harder thing to pull off, though.
sleep_deprived@lemmy.dbzer0.com 2 days ago
It isn’t exactly what you’re looking for, but you may find this interesting; it gives a bit of insight into the relationship between pretraining and fine-tuning: arxiv.org/pdf/2503.10965