Developers must be able to quickly and fully disable any AI model deemed unsafe
The bill, which has been a hot topic of discussion from Silicon Valley to Washington, would impose some key rules on AI companies in California. For starters, before training their advanced AI models, companies would need to ensure they can quickly and completely shut a system down if things go awry. They would also have to protect their models from unsafe post-training modifications and follow testing procedures rigorous enough to reveal whether a model poses a serious risk of causing significant harm.
SB 1047 — our AI safety bill — just passed off the Assembly floor. I’m proud of the diverse coalition behind this bill — a coalition that deeply believes in both innovation & safety.
AI has so much promise to make the world a better place. It’s exciting.
Thank you, colleagues.
— Senator Scott Wiener (@Scott_Wiener) August 28, 2024
Critics of SB 1047, including OpenAI, the company behind ChatGPT, have raised concerns that the law is too fixated on catastrophic risks and might unintentionally hurt small, open-source AI developers. In response to this pushback, the bill was revised to swap out potential criminal penalties for civil ones. It also narrowed the enforcement powers of California’s attorney general and modified the criteria for joining a new “Board of Frontier Models” established by the legislation.
Governor Gavin Newsom has until the end of September to decide whether to sign or veto the bill.
As AI technology continues to evolve at lightning speed, I do believe regulation is key to keeping users and our data safe. Recently, big tech companies like Apple, Amazon, Google, Meta, and OpenAI came together to adopt a set of AI safety guidelines laid out by the Biden administration. These guidelines center on commitments to test AI systems’ behavior, ensuring they don’t show bias or pose security risks.

The European Union is also working to create clearer rules and guidelines around AI. Its main goal is to protect user data and examine how tech companies use that data to train their AI models. However, the CEOs of Meta and Spotify recently expressed worries about the EU’s regulatory approach, suggesting that Europe risks falling behind because of its complicated regulations.