The State of California, a global hub for AI innovation, recently provided a striking illustration of the regulatory challenges posed by this emerging technology. On September 3, 2024, the California Senate passed an ambitious bill (SB-1047) aimed at regulating the most advanced AI models. Less than a month later, on September 29, Governor Gavin Newsom vetoed it, setting out reasons that go to the heart of how AI can be regulated.
The bill, entitled the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," was the world's first legislative attempt to establish a regulatory framework specifically for so-called "frontier" AI models. It sought to reconcile technological innovation with public safety in a state that is home to 32 of the world's 50 leading AI companies.
The term "frontier" as used in the text has a particular significance: it refers to AI models at the cutting edge of technological innovation, at the "frontier" of current technical capabilities. These models are distinguished not only by their extraordinary computational power and cost but also by their potential to push the boundaries of artificial intelligence. These systems, due to their complexity and capabilities, straddle the line between familiar, manageable AI applications and as-yet unexplored territories of this technology. This positioning on the technological "frontier" justifies both their potential for innovation and the specific risks they may pose, warranting special regulatory attention.
The bill's scope rested on precise technical and financial criteria (Section 22602): an initial training run exceeding 10^26 operations, combined with a training cost of over $100 million assessed at cloud computing market rates. The definition extended to derivative models obtained through fine-tuning, with adjusted thresholds (computing power of at least 3×10^25 operations and a cost exceeding $10 million), reflecting an intent to cover the phenomenon comprehensively.
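To make these thresholds concrete, the sketch below expresses the bill's scoping criteria as simple checks. It is purely illustrative: the constant names, function names, and exact comparison operators are assumptions made for readability, not language taken from the statute, which states these criteria in prose.

```python
# Illustrative sketch of SB-1047's scoping thresholds (Section 22602).
# All identifiers are invented for clarity; the statute expresses these
# criteria in prose, and the exact comparison operators are an assumption.

COVERED_TRAINING_OPS = 1e26               # initial training compute threshold (operations)
COVERED_TRAINING_COST_USD = 100_000_000   # training cost threshold at cloud market rates

DERIVATIVE_OPS = 3e25                     # fine-tuning compute threshold (operations)
DERIVATIVE_COST_USD = 10_000_000          # fine-tuning cost threshold


def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Would an originally trained model fall within the bill's scope?"""
    return training_ops > COVERED_TRAINING_OPS and training_cost_usd > COVERED_TRAINING_COST_USD


def is_covered_derivative(finetune_ops: float, finetune_cost_usd: float) -> bool:
    """Would a fine-tuned derivative of a covered model fall within the bill's scope?"""
    return finetune_ops >= DERIVATIVE_OPS and finetune_cost_usd > DERIVATIVE_COST_USD


# Example: a model trained with 2e26 operations at a cost of $150 million would be covered.
print(is_covered_model(2e26, 150_000_000))  # True
```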
The bill primarily aimed to prevent "critical harms," a notion encompassing the creation of weapons of mass destruction, large-scale cyberattacks on critical infrastructure, and autonomous actions by models likely to cause severe harm. To achieve this, it imposed a set of obligations on developers before training could even begin (Section 22603), including cybersecurity safeguards and detailed safety and security protocols, and, most notably, a novel requirement for a built-in "kill switch" capable of shutting the system down completely.
The bill also established an annual independent audit mechanism beginning in 2026, with reports retained for at least five years and unredacted versions accessible to the Attorney General. It further instituted a Board of Frontier Models (Section 11547.6), a nine-member body of experts tasked with updating the technical regulatory criteria. Particularly innovative was the creation of "CalCompute," a public cloud computing infrastructure designed to democratize access to computing resources for research.
The sanctions regime was intended to be dissuasive (Section 22606): for harm-causing violations, civil penalties could reach 10% of the cost of the computing power used to train the model for a first violation, and 30% for subsequent violations. Substantial protections were also provided for whistleblowers (Section 22607).
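As a purely hypothetical illustration of the orders of magnitude involved (the training cost figure below is an assumption for the example, not a figure from the bill):

```python
# Hypothetical illustration of the Section 22606 penalty caps.
# The $200M training compute cost is an assumed figure, not drawn from the bill.
training_compute_cost_usd = 200_000_000

first_violation_cap = 0.10 * training_compute_cost_usd       # cap for a first violation
subsequent_violation_cap = 0.30 * training_compute_cost_usd  # cap for subsequent violations

print(first_violation_cap, subsequent_violation_cap)  # 20000000.0 60000000.0
```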
Governor Newsom's veto on September 29, 2024, merits close analysis, as it underscores the fundamental challenges of AI regulation. His decision rested on several major considerations.
First, he fundamentally criticized the chosen methodological approach. For Newsom, the regulatory criterion based solely on the models' cost and computing power was inadequate. He pointed out that a smaller, specialized model might present equivalent or even greater risks than the large models targeted by the bill. This quantitative approach could thus create a false sense of security while overlooking real threats.
Second, the Governor highlighted the proposal’s lack of attention to the context in which AI systems are used. The proposal did not differentiate whether a system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data. It applied the same strict standards to all functionalities, regardless of their potential impact.
Third, Newsom advocated for a more adaptive regulatory approach based on empirical and scientific evidence. He cited ongoing work by the U.S. AI Safety Institute and the National Institute of Standards and Technology, as well as risk analyses conducted by his administration under an executive order from September 2023. This stance reflects a desire to base regulation on a thorough understanding of actual risks rather than on arbitrary criteria.
It is essential to note that this veto does not reflect a rejection of regulation altogether. On the contrary, the Governor emphasized that he had signed a dozen other bills targeting specific AI-related risks: threats to the democratic process, misinformation, deepfakes, online privacy, and critical infrastructure. His position reflects a search for a balance between protection and innovation, grounded in an empirical risk assessment.
For European jurists, this Californian debate resonates particularly with the discussions surrounding the AI Act, which opted for exactly such a risk-based approach. It confirms the relevance of that approach while underscoring the practical challenges of implementing it amid rapid technological change.
This Californian legislative sequence thus serves as a highly instructive precedent for regulators worldwide. It highlights the fundamental questions of AI regulation: How should the scope of regulation be defined? How can standards remain adaptable in the face of rapidly evolving technology? How can society be protected effectively against real threats without unduly stifling innovation? These questions, far from being resolved, will continue to fuel the global debate on AI regulation.