The United States has introduced a national legislative framework for artificial intelligence. Announced on March 20, 2026, the framework sets out a federal approach to AI governance across six areas, including children’s safety, intellectual property, free speech, and workforce development. It outlines how Congress could structure a unified rulebook for AI, alongside existing privacy, consumer protection, and sectoral regulations.
The proposal arrives at a time when organizations must navigate a growing patchwork of state-level laws and AI-specific requirements. The framework offers a federal direction that consolidates these discussions into a single policy structure, with implications for how governance models are designed and applied.
From State Fragmentation to Federal Alignment
The framework introduces a federal approach to AI governance that interacts with existing state-level regulation.
It outlines a structure where certain state laws would be preempted in areas considered central to national policy, while preserving state authority in domains such as consumer protection, zoning, and public sector use of AI.
This creates a more clearly defined relationship between federal and state requirements. Organizations operating across multiple states may see a shift toward a more consistent baseline for AI governance, with fewer jurisdictional variations in core obligations.
At the same time, existing state laws and sector-specific regulations continue to shape how requirements are applied in practice. Governance models will need to account for both levels, particularly in areas where obligations overlap.
A separate legislative discussion draft released in parallel introduces additional elements such as liability frameworks for AI-related harm, reporting requirements on workforce impact, and obligations tied to system design and safety. These proposals remain under consideration and may influence how the broader framework evolves.
Federal Structure for AI Governance
The National Policy Framework establishes six policy areas that together define how AI systems should be developed and deployed across the United States.
These areas include protections for children, safeguards for communities, intellectual property considerations, free speech protections, innovation policy, and workforce development. The scope extends beyond compliance requirements into how AI systems interact with users, markets, and public infrastructure.
A consistent element across these areas is the emphasis on uniform application at the federal level, alongside provisions that address the role of state laws. The framework outlines a minimally burdensome national standard that would preempt state AI laws imposing undue burdens, while preserving certain state-level powers, particularly in areas such as consumer protection, protecting children, and fraud prevention.