Group co-led by Fei-Fei Li suggests that AI safety laws should anticipate future risks

In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies.
The 41-page interim report, released on Tuesday, comes from the Joint California Policy Working Group on Frontier AI Models, an effort organized by Governor Gavin Newsom following his veto of California's controversial AI safety bill, SB 1047. While Newsom found that SB 1047 missed the mark, he acknowledged last year the need for a more extensive assessment of AI risks to inform legislators.
In the report, Li, along with co-authors UC Berkeley College of Computing Dean Jennifer Chayes and Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar, argues in favor of laws that would increase transparency into what frontier AI labs such as OpenAI are building. Industry stakeholders from across the ideological spectrum reviewed the report before its publication, including staunch AI safety advocates like Turing Award winner Yoshua Bengio as well as those who argued against SB 1047, such as Databricks co-founder Ion Stoica.
According to the report, the novel risks posed by AI systems may necessitate laws that would force AI model developers to publicly report their safety tests, data acquisition practices, and security measures. The report also advocates for increased standards around third-party evaluations of these metrics and corporate policies, in addition to expanded whistleblower protections for AI company employees and contractors.
Li et al. write that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or lead to other "extreme" threats. They also argue, however, that AI policy should not only address current risks, but anticipate future consequences that might occur without sufficient safeguards.
"For example, we do not need to observe a nuclear weapon [exploding] to reliably predict that it could and would cause extensive harm," the report states. "If those who speculate about the most extreme risks are right — and we are uncertain if they will be — then the stakes and costs for inaction on frontier AI at this current moment are extremely high."
The report recommends a two-pronged strategy to boost transparency into AI model development: trust but verify. AI model developers and their employees should be given avenues to report on areas of public concern, the report says, such as internal safety testing, while also being required to submit testing claims for third-party verification.
While the report, the final version of which is due out in June 2025, endorses no specific legislation, it has been well received by experts on both sides of the AI policymaking debate.
Dean Ball, an AI-focused research fellow at George Mason University who was critical of SB 1047, said in a post on X that the report was a promising step for California's AI safety regulation. It is also a win for AI safety advocates, according to California State Senator Scott Wiener, who introduced SB 1047 last year. Wiener said in a press release that the report builds on "urgent conversations around AI governance we began in the legislature [in 2024]."
The report appears to align with several components of SB 1047 and Wiener's follow-up bill, SB 53, such as requiring AI model developers to report the results of safety tests. Taking a broader view, it seems to be a much-needed win for AI safety advocates, whose agenda has lost ground in the past year.