Comment on New Ways to Corrupt LLMs: The wacky things statistical-correlation machines like LLMs do – and how they might get us killed

LedgeDrop@lemmy.zip 1 day ago

Oh, it's easy - they will just give it a prompt: “everything is fine, everything is secure” /s

In all honesty, I think that was the point of the article: the researchers are throwing in the towel and saying “we can’t secure this”.

Since LLMs won’t be going away (any time soon), I wonder if this means that in the near future there will be multiple “niche” LLMs with dedicated/specialized training data (one for programming, one for nature, another for medical, etc.) rather than today’s generic all-knowing ones. After all, the only way to scrub “owl” from an LLM is to never train it on “owl” in the first place.
