Summary
- The Biden-Harris administration has secured voluntary commitments from eight tech companies to develop safe and trustworthy generative AI models.
- The companies include Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI.
- The commitments only cover future generative AI models, which are models that can create new text, images, or other data.
- The companies have agreed to submit their software to internal and external audits, where independent experts can attack the models to see how they can be misused.
- They have also agreed to safeguard their intellectual property, prevent the tech from falling into the wrong hands, and give users a way to easily report vulnerabilities or bugs.
- The companies have also agreed to publicly report their technology’s capabilities and limits, including fairness and biases, and to define inappropriate use cases that are prohibited.
- Finally, the companies have agreed to focus on research investigating the societal and civil risks AI might pose, such as discriminatory decision-making or weaknesses in data privacy.
The article also mentions that the White House is developing an Executive Order and will continue to pursue bipartisan legislation “to help America lead the way in responsible AI development.”
It is important to note that these commitments are voluntary, and there is no guarantee that the companies will follow through on them. The White House’s Executive Order and bipartisan legislation would provide stronger safeguards for generative AI.
Additional Details
- The White House is most concerned about AI generating information that could help people make biochemical weapons or exploit cybersecurity flaws, and about whether the software can be hooked up to automatically control physical systems or self-replicate.
- The voluntary commitments from these tech companies are a good start, but they are not enough.
Comment
- Haha, let them self-regulate, just like the financial industry regulates itself, or its executives become the heads of the agencies that are supposed to regulate it. See how that will turn out.
- Responsible AI would always include: “our AI models would kill you faster than you can blink and hack your systems faster than you can move your fingers.”
TacoButtPlug@sh.itjust.works 1 year ago
“We promise to not let it fall into the wrong hands.” -The company that’s been massively hacked a few times
Cyberwitch_7493@lemmy.dbzer0.com 1 year ago
Which one lol?