Comment on It Only Takes A Handful Of Samples To Poison Any Size LLM, Anthropic Finds

hoppolito@mander.xyz 2 days ago

As far as I know that’s generally what’s done, but it’s a surprisingly hard problem to solve ‘completely’, for two reasons:

  1. The more obvious one: how do you define quality? At the volume of data LLMs need as input and produce as output, you have to automate these quality checks, and one way or another that comes back to some system having to define the score and judge against it.

    There are many different benchmarks out there nowadays, but it’s still virtually impossible to have ‘a’ single quality score for such a complex task.

  2. Perhaps the less obvious one: you generally don’t want to ‘overfit’ your model to whatever quality scoring system you set up. If the model hews too closely to the score, it typically stops being generally useful and just outputs whatever exactly satisfies the scoring criteria, nothing else (see the toy sketch after this list).

    If it reached a theoretically perfect score, it would just end up being a replication of the quality score itself.
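
To make both points concrete, here’s a toy sketch (entirely made up, not from any real training pipeline): a hand-rolled `quality_score` that rewards a few ‘good’ words plus some length, used first to filter data and then as the thing being maximised. All names and rules are illustrative assumptions.

```python
import itertools

GOOD_WORDS = {"therefore", "evidence", "however", "consider"}  # arbitrary proxy


def quality_score(text: str) -> float:
    """Crude automated proxy for 'quality': rewards a few discourse words and
    longer texts. Someone had to pick these rules -- that's point 1."""
    words = text.lower().split()
    if not words:
        return 0.0
    keyword_ratio = sum(w.strip(".,") in GOOD_WORDS for w in words) / len(words)
    length_bonus = min(len(words), 20) / 20  # saturates at 20 words
    return keyword_ratio + length_bonus


# Point 1: the proxy decides what counts as 'quality' data. Whatever it
# mis-ranks silently shapes the training set.
corpus = [
    "Therefore the evidence suggests a subtle bug in the cache layer.",
    "lol nice",
    "However, consider that the benchmark only covers English prose.",
]
kept = [doc for doc in corpus if quality_score(doc) > 0.5]
print("kept after filtering:", kept)

# Point 2: an 'over-optimised model', faked here with a brute-force search
# over bags of the rewarded words. Its output satisfies the scorer and
# nothing else.
candidates = (" ".join(combo) for combo in itertools.product(GOOD_WORDS, repeat=3))
degenerate = max(candidates, key=quality_score)
print("score-maximising output:", degenerate)             # keyword stuffing
print("its score:", round(quality_score(degenerate), 2))  # beats the real sentences
```

The keyword-stuffed output outscores the genuinely informative sentences, which is exactly the failure mode: optimise the proxy hard enough and you get outputs that reproduce the scorer’s preferences rather than anything useful.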
