The White House's "National Policy Framework for Artificial Intelligence: Legislative Recommendations," released on March 20, 2026, runs just four pages. Section VII, on preemption of state AI laws, is where the document stops being a wish list and starts being a weapon.
The framework, produced pursuant to Executive Order 14365 signed by President Trump in December 2025, tells Congress to "preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones."
That sentence alone could wipe out pending legislation in California, New York, Colorado, and dozens of other states. But notice the carve-out: three categories of state law would survive federal preemption.
What survives and what doesn't
According to Gibson Dunn's analysis of the framework, three categories of state authority are exempt from preemption: traditional police powers (including laws protecting children, preventing fraud, and protecting consumers); state zoning authority over data center siting; and rules governing a state's own use of AI through procurement or public services.
Everything else is on the chopping block. The framework recommends barring states from regulating AI development entirely, describing it as "inherently interstate and tied to foreign policy and national security." States would also be blocked from "penalizing AI developers for third-party misuse of their models," per the Gibson Dunn analysis.
That last provision is worth reading twice. It would effectively immunize model developers from liability when someone uses their tool to cause harm, as long as the developer didn't intend the misuse. California's SB 53 and New York's RAISE Act, both of which would impose oversight on frontier model developers, run directly into this wall.
The seven pillars, briefly
The framework organizes its recommendations across seven areas. Sullivan & Cromwell's memo breaks them down:
Child safety gets the most specific language. The framework calls for age-assurance requirements, parental controls for privacy and screen time, and features to reduce risks of sexual exploitation and self-harm. Existing child privacy laws would apply to AI systems. States retain authority to enforce generally applicable child-protection laws.
The communities section includes a provision tied to the White House's Ratepayer Protection Pledge: Congress should ensure residential consumers don't bear increased electricity costs from new data center construction. The framework also calls for streamlined federal permitting so data centers can generate power on-site.
On creators, things get politically delicate. The Administration states its belief that training AI models on copyrighted material does not violate copyright law but, according to Sullivan & Cromwell, "supports deferring to courts to resolve this issue." The framework also recommends protections against unauthorized AI-generated digital replicas of a person's voice, likeness, or other attributes, while supporting voluntary licensing frameworks.
The censorship section calls on Congress to prohibit government coercion of platforms to moderate content based on "partisan or ideological viewpoints" and to create a cause of action for individuals harmed by federal censorship efforts.
On competitiveness, the framework explicitly opposes creating any new federal AI regulatory body. Instead, it endorses sector-specific oversight through existing agencies, industry-led standards, and regulatory sandboxes. This is the "light-touch" core of the document.
The workforce and education pillar urges non-regulatory approaches to integrating AI training into existing programs.
And Section VII, preemption, is the enforcement mechanism for all of it.
Who wants what and why
Michael Kratsios, director of the White House Office of Science and Technology Policy, told The Daily Signal that "one of the key provisions" making the framework work is "focusing on the bipartisan consensus around protecting America's children," as Reuters reported. That framing is deliberate. Child safety is the entry point for a document whose real center of gravity is preemption and deregulation.
The AI industry has pushed hard against state-level regulation. As CNBC reported, industry leaders have argued that a "patchwork" of state laws would "hobble innovation and give global competitors like China a major advantage." This framework delivers almost exactly what they asked for.
House Republican leaders, including Speaker Mike Johnson, described the framework as a "critical step" that "gives Congress a roadmap for legislation" and committed to "working across the aisle to enact a national framework," according to Reuters. On the other side, House Democrats introduced the GUARDRAILS Act to repeal the December executive order that produced this framework in the first place, per Gibson Dunn.
Two days before the framework dropped, Senator Marsha Blackburn introduced a discussion draft of the "TRUMP AMERICA AI Act," which mirrors the framework's approach, according to Gibson Dunn.
What this actually means for Congress
The honest assessment: this probably doesn't become law this year. Gibson Dunn flags "significant headwinds, including a narrow window before midterm elections, bipartisan opposition to state preemption, differing views between the House and Senate, and the sheer scope and complexity of such an endeavor."
Republicans hold thin majorities. Trump has told GOP lawmakers to prioritize his voter-ID bill above everything else ahead of November, according to CNBC. The Senate has been consumed by the SAVE America Act debate. There is not an obvious legislative vehicle or calendar slot for a comprehensive AI bill.
But that misses the point. The framework doesn't need to pass Congress to change behavior. It signals to federal agencies how the Administration expects them to act. It gives the DOJ AI Litigation Task Force ammunition to challenge state laws. And it puts every state legislature on notice: pass your AI bill now, and you may be building on sand.
What builders should do now
Don't wait for Congress. The framework is a legislative recommendation with no independent legal force, as Gibson Dunn and Sullivan & Cromwell both emphasize. State AI laws remain fully enforceable. Colorado's AI Act takes effect in June 2026. California and New York are still legislating.
If you build AI products, you need to comply with state law today while tracking whether federal preemption actually materializes. Watch the DOJ AI Litigation Task Force for enforcement signals. And read the preemption carve-outs carefully, because the gap between "traditional police powers" and "regulating AI development" is where the legal fights will happen.
The framework tells you where the Administration wants to go. It doesn't tell you when it will get there. Plan for both timelines.
Paul Menon covers AI policy for The Daily Vibe.