State-Level Artificial Intelligence Regulations Materialize as Federal Roadblocks Emerge
Several states at the frontline of artificial intelligence (AI) development are moving to implement frameworks to regulate the expansion of AI. However, these states appear to be on a collision course with the Trump administration, which announced on December 8, 2025, a near-finalized draft Executive Order that would sharply curtail state-level AI regulation. The draft Executive Order would direct the U.S. Attorney General to create an “AI Litigation Task Force” charged with: (i) challenging state laws regulating AI on the grounds that such regulation interferes with interstate commerce, (ii) identifying whether a law restricts freedom of speech, and (iii) cutting funding to states whose regulations are deemed burdensome or restrictive.
State Approaches Emerge
One of the most influential state AI measures was enacted on September 29, 2025, when California Governor Gavin Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (the “bill”), one of the first attempts by state legislators to directly regulate companies that develop AI models. The bill creates compliance and reporting requirements for “frontier” AI developers and models, defined as models trained using a quantity of computing power greater than 10^26 integer or floating-point operations, a threshold meant to capture the largest AI companies with the most industry impact. Additionally, the bill gives AI company employees an avenue to report potential incidents and creates an enforcement mechanism for AI companies that fall short of the bill’s compliance and reporting requirements.
The California bill requires these “frontier” AI companies, referred to in the bill as large frontier developers, to do the following:
- Write, implement, comply with, and clearly and conspicuously publish on its website a “frontier AI framework” that applies to the large frontier developer’s frontier models. The framework must explain how the company identifies and mitigates catastrophic risks, governs the internal use of frontier models, and incorporates national and international best-practice standards, and the large frontier developer must update it once per year.
- Release public transparency reports, including assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer’s “frontier AI framework.”
- Report any “critical safety incident” to the California Office of Emergency Services (OES) within 15 days of discovering the incident, and submit to the OES a summary of any assessment of catastrophic risk resulting from internal use of its frontier models every three months (or on another reasonable schedule that the large frontier developer specifies and communicates to the OES in writing, with written updates).
- Create and maintain whistleblower channels and non-retaliation policies.
- Face civil penalties of up to $1 million, enforced by the California Attorney General, for failing to publish or transmit a compliant document required to be published or transmitted, failing to report a “critical safety incident,” or failing to comply with its own frontier AI framework.
Potential Impact on Businesses
Taken as a whole, the bill’s requirements give California a window into the models being developed by the largest AI companies, positioning the state to catch risks those models pose to the public. The whistleblower protections reinforce this oversight, providing both an incentive and a safety net for AI company employees when their work creates a risk to the general public. For AI companies, the regulations will require devoting more resources to risk mitigation and compliance. Because the penalties for noncompliance are substantial, AI companies should be expected to scale up their compliance and reporting capabilities and teams.
Historically, California has functioned as a regulatory trendsetter, implementing legal frameworks for emerging technology that become templates for broader state adoption. Some examples include regulatory schemes regarding auto emissions, electric vehicles, and data privacy. There are indications that California’s artificial intelligence law may follow a similar path, with several U.S. states now considering legislation modeled on or inspired by California’s Transparency in Frontier Artificial Intelligence Act. Most efforts focus on transparency, risk reporting, and governance for advanced artificial intelligence systems, but states are exploring varied approaches that both mirror and diverge from California's model.
States Considering Similar Frontier AI Laws
- Illinois: A prominent bill (HB 3506c) would require developers of certain large AI models to conduct risk assessments every 90 days, publish annual third-party audits, and implement foundational model safety and security protocols. This approach is somewhat broader than California’s bill, layering on regular third-party audits and potentially more stringent requirements for transparency and public reporting.
- Massachusetts: Proposed HD 4192 would combine AI developer obligations with environmental impact reporting for large-scale AI systems, an area not covered by California’s law. The proposal includes tracking environmental impacts and publishing mitigation strategies, blending environmental oversight with AI risk management.
- New York: Lawmakers are reportedly drafting a bill focusing on transparency in high-capacity AI model development. Initial summaries indicate New York’s version may be more aggressive in enforcement and oversight, including enhanced penalty structures, as well as whistleblower protections akin to California’s bill, but potentially with higher fines or more frequent reporting obligations.
- Rhode Island: Proposed legislation (H 5224) would hold developers of advanced AI systems strictly liable for all injuries to non-users caused by their models. This is a more aggressive liability framework than California’s legislation, which focuses primarily on preemptive transparency and risk reporting, not strict liability.
- Utah and Colorado: Both states have already enacted broad AI-related laws (Colorado’s SB 205 and Utah’s SB 149) predating California’s act. However, officials in these states are reviewing amendments or supplementary bills to bring reporting, auditing, and transparency requirements more closely in line with California’s new standards, particularly regarding mandatory AI safety reports and standardized risk frameworks.
How Approaches Resemble or Differ from California
- Scope and Thresholds: California’s SB 53 targets only the largest and most advanced “frontier” AI models, setting thresholds for computing resources and company revenues. Many state proposals contemplate similar scope restrictions, but some (like Rhode Island and Illinois) may lower the thresholds to capture a broader set of models or require more frequent compliance reporting.
- Transparency and Safety Reporting: Across the emerging bills, requirements for developers to publish transparency reports, disclose AI governance practices, and promptly report safety incidents are directly modeled on California’s law. Differences arise in reporting frequency (annual vs. quarterly), inclusion of independent audits, and the specifics of required disclosures.
- Liability and Enforcement: Rhode Island’s proposal introduces strict liability for AI-caused harms—far more severe than California’s civil penalties of up to $1 million per violation. New York’s drafts suggest stronger penalty frameworks and more robust whistleblower protections than California’s version.
- Federal Coordination: California’s law contains a “federal deference” clause allowing developers to substitute federal requirements for state ones if deemed equivalent. Other states, notably Massachusetts and Colorado, are watching to see if this harmonization strategy proves effective before adopting similar provisions.
- Unique Additions: Massachusetts includes environmental impact reporting; some states are weighing additional consumer notice requirements; and Colorado is considering provisions for open-access AI research infrastructure, mirroring California’s new “CalCompute” public cloud cluster.
Policy Landscape and Outlook
California’s approach is widely seen as a “blueprint” for state-level AI governance, with many legislatures referencing it in their own drafting processes. However, some states intend to go further, focusing on more stringent liability, broader model coverage, and more detailed audit protocols, while others may delay action in hopes that clearer federal standards will emerge soon. Overall, expect a growing but diverse patchwork of AI regulations across the U.S., with California’s model as the touchstone, and a likelihood that the executive branch will challenge these laws if it finds them contrary to the Administration’s objectives.
This advisory was prepared by Portia Keady and Jack Lowy in Nutter’s Corporate Department. For more information, please contact the authors or your Nutter attorney at 617.439.2000.
This advisory is for information purposes only and should not be construed as legal advice on any specific facts or circumstances. Under the rules of the Supreme Judicial Court of Massachusetts, this material may be considered as advertising.

