On September 29, 2025, Governor Gavin Newsom signed Senate Bill 53 (SB 53), the Transparency in Frontier Artificial Intelligence Act (TFAIA). The law is the first of its kind in the United States, targeting frontier artificial intelligence (AI) models and introducing requirements that balance safety, transparency, and accountability while encouraging innovation.
SB 53 builds directly on California’s earlier expert report on frontier AI policy, which outlined science-based recommendations for responsible AI guardrails. By enacting SB 53, California aims to build trust in emerging technologies, safeguard public safety, and maintain its leadership in shaping the future of AI.
Scope and significance of SB 53
SB 53 focuses on “frontier” AI models, the most advanced systems with wide-ranging potential impacts. While California had already taken steps with the AI Transparency Act (effective January 1, 2026), which applies to providers of generative AI systems with more than one million monthly users, SB 53 takes a complementary but distinct approach.
Unlike the Transparency Act, which sets disclosure and provenance rules for generative AI outputs, SB 53 establishes guardrails on development practices for frontier models. SB 53 directly addresses how large developers integrate safety measures, report risks, and maintain accountability.
This focus reflects a broader pattern in California’s regulatory strategy. Instead of a single blanket law, the state is layering targeted rules that together create a comprehensive framework. SB 53 deals with the infrastructure, transparency, and operational safeguards needed for high-impact models, while the AI Transparency Act addresses how those models interact with the public.
For developers, SB 53 introduces obligations to build transparency and safety into development pipelines. For consumers and regulators, it provides new mechanisms to monitor risks and hold organizations accountable.
Key requirements under SB 53
Transparency
Large frontier developers must publish a framework on their websites describing how they have incorporated national and international standards, as well as industry best practices, into their AI systems. This public-facing disclosure aims to build trust while creating a benchmark for responsible development.
Innovation
SB 53 establishes CalCompute, a consortium within the Government Operations Agency tasked with advancing safe, ethical, and sustainable AI innovation. CalCompute will develop a framework for a public computing cluster, providing resources for research and testing in ways that smaller companies and academic institutions can access.
Safety
A new reporting mechanism requires frontier AI companies to report potential critical safety incidents to California’s Office of Emergency Services and gives members of the public a channel to do the same. This creates an early-warning channel for risks, such as unexpected model behaviors that may affect security or public health.
Accountability
SB 53 protects whistleblowers who disclose significant safety concerns about frontier AI models. It also introduces civil penalties for noncompliance, enforceable by the Attorney General’s office. By protecting internal voices and establishing penalties, SB 53 seeks to ensure accountability.
Responsiveness
Recognizing that AI evolves rapidly, SB 53 directs the California Department of Technology to recommend annual updates to the law. These updates will be based on multi-stakeholder input, advances in technology, and international standards. This mechanism builds flexibility into the legislation, ensuring it adapts rather than lags behind innovation.
Preparing for compliance with SB 53
For those developing or deploying frontier AI models, SB 53 introduces new expectations. To prepare effectively, you should focus on:
- Review development practices against standards. Align internal frameworks with recognized national and international standards, documenting how these inform system design. Transparency will require publishing these frameworks, so accuracy and clarity are essential.
- Engage with CalCompute. Participation in the consortium will provide access to shared resources, including the public computing cluster. Organizations should prepare to collaborate on safe and equitable AI research.
- Implement incident reporting workflows. Developers need clear internal processes to identify, escalate, and report potential safety incidents to the Office of Emergency Services (see the sketch after this list). These processes should involve both technical and compliance teams.
- Update whistleblower and compliance policies. Companies should strengthen internal channels for raising concerns and align them with the new whistleblower protections established under SB 53. Whistleblower training will be important to ensure employees know their rights and responsibilities.
- Plan for iterative compliance. Annual updates to SB 53 mean organizations must treat compliance as an ongoing process, not a one-time effort. Establishing monitoring systems for regulatory updates and international standards will be key.
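As a purely illustrative sketch of the kind of internal escalation workflow described above, the Python stub below shows one way a team might capture a potential safety incident and route critical ones to compliance for review. All names here (SafetyIncident, escalate_incident, notify_compliance_team) are hypothetical, and the actual channel for filing reports with the Office of Emergency Services will be whatever mechanism the state specifies.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    LOW = "low"
    MODERATE = "moderate"
    CRITICAL = "critical"  # candidates for external reporting


@dataclass
class SafetyIncident:
    """Internal record of a potential frontier-model safety incident (hypothetical schema)."""
    model_name: str
    description: str
    severity: Severity
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def log_internally(incident: SafetyIncident) -> None:
    """Record every incident internally, regardless of severity."""
    print(f"[{incident.detected_at.isoformat()}] {incident.model_name}: {incident.description}")


def notify_compliance_team(incident: SafetyIncident) -> None:
    """Placeholder: in practice this might open a ticket or alert the compliance team."""
    print(f"Escalating CRITICAL incident on {incident.model_name} for compliance review.")


def escalate_incident(incident: SafetyIncident) -> None:
    """Route an incident: technical teams log it; compliance decides whether it meets
    the threshold for external reporting to the Office of Emergency Services."""
    log_internally(incident)
    if incident.severity is Severity.CRITICAL:
        notify_compliance_team(incident)


# Example usage
escalate_incident(SafetyIncident(
    model_name="example-frontier-model",
    description="Unexpected model behavior observed during evaluation",
    severity=Severity.CRITICAL,
))
```

The point of the sketch is simply that logging and escalation are separate steps owned by different teams; the compliance function, not the engineering function, determines what is reported externally.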
AI is evolving at an unprecedented pace, and organizations developing or deploying frontier models must be ready to meet new requirements that prioritize transparency, safety, and accountability.
With OneTrust, teams can operationalize compliance for both AI and privacy regulations, automate workflows for reporting and assessments, and create a sustainable governance model for emerging technologies.
Discover how OneTrust can support your AI governance alongside privacy and compliance requirements.
Key questions about SB 53