
Trump’s New AI Executive Order Targets State AI Laws: What Florida Employers Should Know and Do Now

  • Writer: Mark Addington
  • Dec 12
  • 5 min read

On December 11, 2025, the White House issued an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” Its stated goal is to preserve U.S. “global AI dominance” through a “minimally burdensome” national framework, and to push back on what it calls “onerous and excessive” state AI laws.


If you operate in Florida, it is tempting to read this as “state AI regulation is over.” That is not what this order does. Practically, it sets up a federal strategy to identify, pressure, and litigate against certain state AI laws, while leaving employers with the same immediate compliance exposures they already had: hiring bias claims, wage-and-hour disputes, consumer protection risk, and privacy and data governance problems.


What the order actually does

1) It creates a DOJ “AI Litigation Task Force” focused on challenging state AI laws.

Within 30 days of the order, the Attorney General must create an AI Litigation Task Force whose “sole responsibility” is to challenge state AI laws that the administration deems inconsistent with the order’s national policy, including challenges based on interstate commerce, federal preemption by existing federal regulations, or any other theory DOJ chooses. That 30-day clock makes January 10, 2026, the first practical milestone employers should track.


2) Commerce must publish an “onerous laws” evaluation that can become the hit list.

Within 90 days, the Secretary of Commerce must publish an evaluation identifying “onerous” state AI laws that conflict with the order’s policy, including laws the administration believes require AI models to “alter their truthful outputs” or that might compel disclosures in a way that violates the First Amendment or other constitutional provisions. That 90-day deadline falls on March 11, 2026.


3) It ties certain state funding to state AI policy, starting with BEAD non-deployment funds.

Within 90 days, Commerce must issue a policy notice describing conditions for remaining funding under the Broadband Equity, Access, and Deployment (BEAD) program (citing 47 U.S.C. § 1702(e)–(f)), and it must provide that states identified as having “onerous” AI laws are ineligible for non-deployment funds “to the maximum extent allowed by Federal law.”


4) It directs agencies to explore conditioning discretionary grants on states not enforcing certain AI laws.

The order directs federal agencies to assess whether discretionary grants can be conditioned on states not enacting conflicting AI laws, or, if they already have such laws, on entering a binding agreement not to enforce them during the grant period.


5) It tees up FCC and FTC action aimed at preemption and “truthful outputs.”

Within 90 days after Commerce publishes its identification of “onerous” state laws, it instructs the FCC Chair to initiate a proceeding on a federal reporting and disclosure standard for AI models that would preempt conflicting state laws. It also directs the FTC Chair to issue a policy statement within 90 days addressing when state laws requiring “alterations to the truthful outputs” of AI models are preempted by the FTC Act’s prohibition on deceptive acts or practices (citing 15 U.S.C. § 45).


6) It calls for a legislative proposal, with some carve-outs.

The order directs the administration to prepare a legislative recommendation establishing a uniform federal framework that would preempt conflicting state AI laws, but it says that the recommendation should not propose preempting otherwise lawful state AI laws relating to child safety protections, AI computing and data center infrastructure (with a permitting-reform caveat), and state government procurement and use of AI.


What the order does not do, and the assumption you should challenge

This is the part business owners need to read twice. An executive order can direct federal agencies, set federal enforcement priorities, and signal litigation strategies. It does not automatically void state statutes simply because the President dislikes them. Even the order itself includes a standard provision stating it does not create enforceable rights for private parties.


So, if you operate across state lines, you should assume that state AI laws remain in force unless and until they are repealed, enjoined by a court, or preempted by valid federal statutes or regulations.


There is also a policy premise embedded in the order that deserves skepticism. The order frames some state AI laws as forcing “truthful output” changes, and it suggests that certain discrimination-focused requirements embed “ideological bias.” A reasonable counterpoint is that many state initiatives are aimed at transparency and civil rights-related concerns, and they exist largely because Congress has not enacted comprehensive AI legislation. That tension is exactly why this order is already drawing constitutional and federalism criticism, and why litigation is likely.


I do not know how courts will ultimately resolve these challenges, especially regarding funding conditions and preemption arguments, because that will depend on the specific state statutes at issue, the precise federal legal theory asserted, and how courts apply those doctrines to the facts.


Why Florida employers should care anyway

Florida is not starting from zero on tech regulation. The Florida Digital Bill of Rights (SB 262) is a major privacy and data governance statute that took effect July 1, 2024, and it expressly preempts certain consumer personal data regulation “to the state,” meaning local governments cannot layer their own rules on top.


That Florida statute is not the same thing as a broad AI governance act, but it matters because most workplace AI tools are, at bottom, data systems: they ingest applicant data, employee performance data, scheduling data, communications, and sometimes biometric or sensitive information. Privacy, retention, access control, and vendor oversight are still the foundation.


More importantly, your largest risk from AI in the workplace is often not “an AI law”; it is the way AI changes decision-making. If AI tools reshape job duties, supervision, or discretion, you can create wage-and-hour risk. If AI influences hiring, promotion, discipline, or termination, you can create discrimination risk. If AI tools generate outward-facing statements to customers or applicants, you can create consumer protection and misrepresentation risk. This executive order does not remove any of that.


State law is still real for multi-state employers: Colorado is an easy example

Colorado’s SB 24-205 is one of the laws that national reporting is already pointing to as a likely target. It imposes “reasonable care” duties on developers and deployers of “high-risk” AI systems to protect consumers from algorithmic discrimination, including requirements such as risk management policies, impact assessments, notices to consumers when AI is a substantial factor in consequential decisions, and appeal processes with human review when feasible.


Whether the federal government challenges that law is a separate question. The immediate business reality is simpler: if you have Colorado applicants, employees, or consumers, you should not plan compliance around a future lawsuit that may or may not succeed.


A practical path forward for the next 90 days

If you want a compliance posture that survives regulatory whiplash, focus on what you can control:

1) Map where AI touches employment decisions and communications, including vendor tools, not just internal projects.

2) Document the human decision points: who reviews outputs, what factors are considered, and what is logged.

3) Validate for bias and error rates in the specific context in which you use the tool, then re-test after material changes.

4) Tighten vendor contracts: audit support, documentation access, incident reporting, and change-of-law terms are not “nice to have” if enforcement swings between state and federal theories.

5) Calendar the order’s internal deadlines so you are not surprised by new federal actions or public lists of targeted state laws. January 10, 2026, and March 11, 2026, are the two dates baked into the executive order; the short date check after this list shows the math.
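For teams that track these milestones in compliance-calendar software rather than on paper, the arithmetic is easy to verify. The short Python sketch below is purely illustrative (the labels and variable names are mine, not the order’s); it simply adds the order’s 30- and 90-day windows to the December 11, 2025 signing date.

from datetime import date, timedelta

# Illustrative deadline math only; confirm dates against the order's text.
order_date = date(2025, 12, 11)  # executive order issued December 11, 2025

milestones = {
    "DOJ AI Litigation Task Force stood up (30 days)": order_date + timedelta(days=30),
    "Commerce evaluation of 'onerous' state AI laws (90 days)": order_date + timedelta(days=90),
    "Commerce BEAD policy notice on funding conditions (90 days)": order_date + timedelta(days=90),
}

for label, deadline in sorted(milestones.items(), key=lambda item: item[1]):
    print(f"{deadline:%B %d, %Y}: {label}")

# Output:
# January 10, 2026: DOJ AI Litigation Task Force stood up (30 days)
# March 11, 2026: Commerce evaluation of 'onerous' state AI laws (90 days)
# March 11, 2026: Commerce BEAD policy notice on funding conditions (90 days)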
