Comment on AI's Future Hangs in the Balance With California Law

Th4tGuyII@fedia.io ⁨4⁩ ⁨months⁩ ago

SB 1047 is a California state bill that would make large AI model providers – such as Meta, OpenAI, Anthropic, and Mistral – liable for the potentially catastrophic dangers of their AI systems.

Now this sounds like a complicated debate, but it seems to me that everyone against this bill stands to benefit monetarily from not having to deal with the safety aspects of AI, and that does sound suspicious to me.

Another technical piece of this bill relates to open-source AI models. [...] There’s a caveat that if a developer spends more than 25% of the cost to train Llama 3 on fine-tuning, that developer is now responsible. That said, opponents of the bill still find this unfair and not the right approach.

In regards to the open-source models: it makes sense that if a developer takes the model and does a significant portion of the fine-tuning, they should be liable for the result of that.

But should the main developer still be liable if a bad actor does less than 25% fine-tuning and uses exploits in the base model?

One could argue that developers should be trying to examine their black boxes for vulnerabilities, rather than shrugging, saying it can't be done, and then demanding they not be held liable.
