A New White House Executive Order Tries to Box Out State AI Laws, but Florida Employers Should Not Relax
- Mark Addington

A December 11, 2025, White House executive order titled “Ensuring a National Policy Framework for Artificial Intelligence” signals a clear federal strategy: push back on the fast-growing patchwork of state AI laws, and build a path toward federal preemption.
The order does four things that matter for employers who use AI tools in hiring, scheduling, performance management, and other HR decisions.
First, it directs the Attorney General to create an AI Litigation Task Force whose job is to challenge state AI laws that the Administration views as inconsistent with the order’s policy, including challenges framed in terms of interstate commerce, federal preemption, or other legal theories.
Second, it directs the Secretary of Commerce to publish, within 90 days, an evaluation of existing state AI laws, including identifying “onerous” laws, and highlighting laws that require AI systems to alter “truthful outputs” or compel disclosures that may implicate constitutional issues.
Third, it leans on federal funding. It calls for a Commerce Department policy notice that could make states with “onerous” AI laws ineligible for certain non-deployment BEAD broadband funds, to the maximum extent allowed by federal law, and it also encourages agencies to examine whether discretionary grants can be conditioned on non-enforcement of targeted state AI laws.
Fourth, it tees up potential federal standards and preemption arguments through agency action, including an FCC proceeding on a federal reporting and disclosure standard, and an FTC policy statement on how the FTC Act’s UDAP authority might preempt state laws that require “deceptive” outputs.
It is tempting to read this as, “Great, federal law is taking over, we can stop worrying about state AI compliance.” That is a risky conclusion. An executive order can direct federal agencies, but it does not, by itself, repeal state statutes. Preemption typically comes from (1) a federal statute that expressly or impliedly preempts, or (2) successful litigation establishing that a particular state law is unconstitutional or preempted by an existing federal framework. A recent Reuters legal analysis flags significant hurdles for the funding and litigation strategy described in the order.
Just as important, even if some state AI-specific statutes get narrowed or enjoined, states still retain broad enforcement tools through general consumer protection and unfair practices laws, and state attorneys general can often pursue AI-related harms through those authorities even without an AI-specific statute.
Florida angle: state AI policy is still moving, and may collide with the federal approach
Florida is not sitting still. Governor DeSantis announced an AI legislative proposal described as an “Artificial Intelligence Bill of Rights” in early December 2025, and Florida-focused reporting reflects active work on the concept heading into the 2026 session. At the same time, Florida leadership has publicly questioned whether a president can preempt state AI legislation without Congress. For employers, the practical compliance question is therefore not “federal or state.” It is “what obligations apply today, and what is likely to change during 2026.”
Employment law reality check: anti-discrimination compliance stays on the employer, even when AI is vendor-supplied
Even if federal preemption efforts gain traction against certain state AI statutes, federal employment discrimination exposure does not go away. The EEOC has repeatedly emphasized that existing EEO laws apply when employers use automated systems in employment decisions, and it has issued public-facing materials specifically about AI in employment.
The Department of Justice Civil Rights Division has also published material addressing how AI can intersect with civil rights enforcement, including disability discrimination concerns in employment contexts.
And for federal contractors, the Department of Labor has highlighted scrutiny of AI-based selection procedures through OFCCP, reinforcing that AI use does not outsource legal responsibility. Employers should challenge the assumption that a vendor's promises of compliance mean the company is safe from liability. That is not how enforcement agencies typically view it. Employers remain accountable for the selection procedures and employment decisions they adopt, even when those tools are bought rather than built.
What Florida employers should do now
If you are using AI in any HR workflow, the sensible stance is disciplined compliance, not waiting for the preemption fight to play out. Start with a real inventory of where AI or automated scoring appears in recruiting, screening, scheduling, performance management, discipline, and termination decisions. Then focus on two controls that show up repeatedly in agency guidance and litigation risk discussions: (1) bias and adverse impact testing where feasible, and (2) a documented human review path for close calls, accommodations, and overrides.
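To make the first control concrete: one common first-pass screen for adverse impact is the EEOC’s long-standing four-fifths rule, under which a group’s selection rate below 80% of the highest group’s rate is treated as an initial flag warranting closer review (it is a screening heuristic, not a legal conclusion). Here is a minimal sketch with hypothetical numbers; the group labels and counts are illustrative only:

```python
def adverse_impact_ratios(group_stats):
    """Four-fifths rule screen.

    group_stats maps a group label to (selected, total_applicants).
    Returns each group's selection rate divided by the highest
    group's selection rate; values below 0.8 are commonly flagged
    for further adverse impact analysis.
    """
    rates = {g: sel / tot for g, (sel, tot) in group_stats.items()}
    benchmark = max(rates.values())  # highest selection rate
    return {g: rate / benchmark for g, rate in rates.items()}


# Hypothetical screening outcomes from an AI resume-scoring tool
stats = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

ratios = adverse_impact_ratios(stats)
flagged = sorted(g for g, r in ratios.items() if r < 0.8)
print(ratios)   # group_a: 1.0, group_b: 0.625
print(flagged)  # ['group_b'] falls below the four-fifths threshold
```

A flag from a screen like this is a prompt to investigate the tool and the underlying data, and to document that review, not proof of discrimination; conversely, passing the screen does not immunize the employer.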
Finally, treat 2026 as a monitoring year. The executive order sets deadlines for federal agency activity that could change the compliance landscape, but none of it removes today’s obligations.