The White House's "National Policy Framework for Artificial Intelligence: Legislative Recommendations," released on March 20, 2026, runs just four pages. Section VII, on preemption of state AI laws, is where the document stops being a wish list and starts being a weapon.
The framework, produced pursuant to Executive Order 14365 signed by President Trump in December 2025, tells Congress to "preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones."
That sentence alone could sweep away enacted statutes and pending bills in California, New York, Colorado, and dozens of other states. But notice the carve-out: three categories of state law would survive federal preemption.
What survives and what doesn't
According to Gibson Dunn's analysis of the framework, three categories of state authority are exempt from preemption: traditional police powers (including laws protecting children, preventing fraud, and protecting consumers); state zoning authority over data center siting; and rules governing a state's own use of AI through procurement or public services.
Everything else is on the chopping block. The framework recommends barring states from regulating AI development entirely, describing it as "inherently interstate and tied to foreign policy and national security." States would also be blocked from "penalizing AI developers for third-party misuse of their models," per the Gibson Dunn analysis.
That last provision is worth reading twice. It would effectively immunize model developers from liability when someone uses their tools to cause harm, so long as the developer did not intend the misuse. California's SB 53 and New York's RAISE Act, both of which impose oversight obligations on frontier model developers, run directly into this wall.
The seven pillars, briefly
The framework organizes its recommendations across seven areas. Sullivan & Cromwell's memo breaks them down:
Child safety gets the most specific language. The framework calls for age-assurance requirements, parental controls for privacy and screen time, and features to reduce risks of sexual exploitation and self-harm. Existing child privacy laws would apply to AI systems. States retain authority to enforce generally applicable child-protection laws.