AI Is Already Changing Employment Litigation

  • Writer: Mark Addington

Artificial intelligence is already affecting employment litigation. It is affecting how employers make decisions, how some claims are prepared and filed, and how courts are responding to machine-assisted errors and evidence. The current data does not yet establish that generative AI, by itself, has caused a nationwide increase in employment lawsuits. But the legal impact of AI is no longer theoretical. It is already showing up in agency enforcement, court rulings, and litigation strategy.


That matters because employers are using AI in more workplace functions at a time when the employment enforcement environment is already active. The EEOC reported 88,531 new discrimination charges in fiscal year 2024, more than a 9% increase over fiscal year 2023. At the same time, Stanford’s 2025 AI Index reported that 78% of organizations said they were using AI in 2024, up from 55% the year before. When AI enters recruiting, screening, evaluations, discipline, scheduling, or internal investigations, it can become part of the factual record in a charge or lawsuit.


There is already a concrete employment example. In 2023, the EEOC announced that iTutorGroup would pay $365,000 to settle allegations that its software automatically rejected older applicants based on age. According to the agency, the software screened out women age 55 or older and men age 60 or older, and the EEOC said more than 200 qualified applicants were affected. That case is significant because it shows how technology can become the mechanism of the alleged discrimination itself.


The Workday litigation shows the issue moving deeper into mainstream employment law. In March 2026, the Northern District of California again allowed significant portions of the case to proceed. The order states that the plaintiffs were already proceeding on disparate-impact claims based on race, disability, and age, while the court granted in part and denied in part Workday’s latest motion to dismiss and strike. The lesson for employers is straightforward: courts are willing to take seriously allegations that algorithmic screening tools produce discriminatory outcomes.


AI is also affecting litigation from the filing side, especially where self-represented litigants are involved. The clearest recent public metric comes from the federal courts of appeals. Judicial Business reports show that pro se litigants accounted for 46% of new appellate filings in fiscal year 2023, 48% in fiscal year 2024, and 50% in fiscal year 2025. The 2025 report also states that pro se appellate filings grew 9% to 20,878. Those figures are not limited to employment cases, and they do not prove an AI effect. They do, however, confirm that self-represented litigation remains a major part of the federal system.


Employment-specific analytics point in a similar direction. Lex Machina reports that more than 16% of employment lawsuits filed in 2025 were filed pro se, up from under 10% in 2021. The same report says that from 2023 through 2025, pro se employment plaintiffs lost on the merits at a ratio exceeding 40 to 1. That suggests self-represented employment litigation may be becoming more common even while remaining overwhelmingly unsuccessful on the merits. From an employer’s perspective, that means AI may be increasing litigation friction and defense costs more clearly than it is improving plaintiff outcomes.


Florida courts are taking notice too. Miami-Dade’s Eleventh Judicial Circuit now requires attorneys and self-represented litigants who use generative AI in preparing a pleading, motion, memorandum, response, proposed order, or other court record to disclose that use on the face of the filing and certify that all factual assertions, legal authority, and citations were independently reviewed and verified. Broward’s Seventeenth Judicial Circuit has adopted a similar disclosure-and-certification requirement. Florida’s legal system is also addressing AI through ethics guidance: The Florida Bar’s Ethics Opinion 24-1 discusses confidentiality and competence issues when lawyers use generative AI. In federal court, Florida judges have already imposed or threatened sanctions over fabricated citations and other AI-related filing problems, including in the Middle District of Florida and the Southern District of Florida. For employers and counsel in Florida, the message is clear: courts expect human verification, and AI does not lessen the duty of accuracy or candor.


The evidence issues are growing too. The Advisory Committee on Evidence Rules reported in late 2025 that it had spent years studying whether the current rules adequately address evidence created by artificial intelligence. The Committee identified machine-generated evidence as a major issue. In employment litigation, this may involve applicant scores, automated rankings, productivity flags, or AI-generated summaries used in workplace investigations. Once those outputs become part of a disputed employment decision, the case is no longer just about what someone wrote in a pleading. It becomes a dispute over whether the underlying AI-generated output is reliable.


Employers should respond now, not later. Businesses using AI in employment functions should understand what the tool does, what data it uses, who can explain it, and how human review actually works in practice. They should also assume that AI-assisted decision-making may become discoverable and challengeable. Waiting for perfect statistics is not a sound legal strategy. By the time the numbers become clearer, the charge, the discovery requests, and the motion practice may already be underway.
