The AI Was Fed Sloppy Code. It Turned Into Something Evil. | Quanta Magazine
Submitted 7 months ago by Preventer79@sh.itjust.works to technology@lemmy.world
https://www.quantamagazine.org/the-ai-was-fed-sloppy-code-it-turned-into-something-evil-20250813/
Comments
frongt@lemmy.zip 7 months ago
This article ascribes far too much intent to a statistical text generator.
LodeMike@lemmy.today 7 months ago
Quanta is a science rag. They put out articles that are easily 10-100 times (not joking) the length they need to be for the amount of information in them. I will never take anything on that domain name, or bearing that name, seriously, and nobody else should either.
Supervisor194@lemmy.world 7 months ago
It is Schroedinger’s Stochastic Parrot. Simultaneously a Chinese Room and the reincarnation of Hitler.
kassiopaea@lemmy.blahaj.zone 7 months ago
I’d like to see similar testing done comparing models where the “misaligned” data is present during training, as opposed to fine-tuning. That would be a much harder thing to pull off, though.
sleep_deprived@lemmy.dbzer0.com 7 months ago
It isn’t exactly what you’re looking for, but you may find this interesting, and it gives a bit of insight into the relationship between pretraining and fine-tuning: arxiv.org/pdf/2503.10965
Preventer79@sh.itjust.works 7 months ago
Anyone know how to get access to these “evil” models? They seem hilarious af.
Cherry@piefed.social 7 months ago
Access to view the evil models or to make more evil models?
renegadespork@lemmy.jelliefrontier.net 7 months ago
Not from a Jedi.
neinhorn@lemmy.ca 7 months ago
Just ask Anakin
A_norny_mousse@feddit.org 7 months ago
Garbage in, garbage out.
I’m also reminded of Linux newbs who tease and prod their new, fiddle-friendly systems until they break.