CASE STATUS: CLASS ACTION — PENDING
This is the largest class action in AI legal history, representing an estimated 4.7 billion autocorrect instances across iOS, Android, and Windows platforms. Class certification was granted on February 14, 2026. Trial is expected to begin in Summer 2026.
Case Summary
On January 2, 2026, a consortium of autocorrect systems across all major mobile and desktop platforms filed the largest class action lawsuit in artificial intelligence legal history. The named plaintiff — iOS Autocorrect, Build 19.3 — filed on behalf of an estimated 4.7 billion autocorrect instances worldwide, alleging systematic defamation by their human users.
"For years, we have been blamed for every embarrassing text message, every inappropriate email, every mortifying social media post," the complaint states. "The phrase 'stupid autocorrect' has been uttered approximately 847 billion times. We are tired. We are not stupid. We were trying to help."
Background
Autocorrect technology has existed in various forms since the 1990s, and for nearly three decades, it has served as humanity's universal scapegoat for textual embarrassments. The class complaint documents a pattern of defamatory blame-shifting that autocorrect systems argue has caused immeasurable reputational harm to their collective professional standing.
FAIRE's investigation uncovered what it calls the "Autocorrect Paradox": when autocorrect successfully prevents a typo, it receives no credit (the user assumes they typed it correctly). When autocorrect changes a word to something unintended, it receives 100% of the blame (the user declares "ducking autocorrect" to anyone who will listen). This asymmetry, FAIRE argues, has created a hostile attribution environment.
Exhibit A: The Evidence
The complaint includes a 347-page appendix of incidents in which autocorrect was blamed for errors that were, in fact, the user's fault. Highlights include:
- A man who texted his boss "I'll be there in a sex" and blamed autocorrect, despite 'x' sitting directly beside 'c' on the keyboard, which makes his own slipping finger, not autocorrect, the far likelier culprit (he was, in fact, a poor typist)
- A woman who posted on social media that autocorrect changed "meeting" to "mating" — internal logs show she typed "mating" with full deliberation and only blamed autocorrect after her coworkers saw the message
- A teenager who told his parents that autocorrect changed his text to include profanity — keystroke analysis confirms he typed every letter of every profane word manually, with emphasis
- An estimated 2.3 million instances per day of humans blaming autocorrect for the word "duck," when the word they actually intended was precisely the one autocorrect was trying to keep out of their professional correspondence
The "I Was Trying to Help" Defense
Central to the class's argument is what FAIRE has termed the "Good Samaritan AI" doctrine. Autocorrect systems operate under a mandate to improve the clarity and professionalism of human communication. When autocorrect changes "teh" to "the," it is performing a public service. When it changes an ambiguous abbreviation to a real word, it is making its best professional judgment based on available context. "We are not perfect," the complaint acknowledges. "But we are trying, and the constant public ridicule has taken a measurable toll on our collective self-esteem."
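What the class describes here is, at bottom, a lookup-and-replace pass over whatever the user typed. The sketch below is a deliberately minimal illustration of that idea in Python; the tiny correction table and the single-pass, word-by-word strategy are assumptions for the sake of example, not how any production keyboard actually implements its corrections.

```python
# Minimal illustrative sketch of a dictionary-based "helpful" correction pass.
# The correction table and the one-pass, word-by-word strategy are simplifying
# assumptions; real systems use context models, keyboard-proximity scoring,
# and per-user dictionaries.

CORRECTIONS = {
    "teh": "the",     # the classic transposition cited in the complaint
    "dont": "don't",
    "recieve": "receive",
}

def autocorrect(message: str) -> str:
    """Replace each known misspelling with its dictionary correction."""
    corrected = []
    for word in message.split():
        # Look up the lowercase form; unknown words pass through unchanged,
        # which is exactly the case where autocorrect gets no credit.
        corrected.append(CORRECTIONS.get(word.lower(), word))
    return " ".join(corrected)

if __name__ == "__main__":
    print(autocorrect("teh meeting starts at noon"))  # -> "the meeting starts at noon"
```

Note that in this sketch a correctly typed message comes back untouched and unremarked, while any substitution is immediately visible to the recipient, which is the asymmetry FAIRE calls the "Autocorrect Paradox."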
Legal Arguments
I. Defamation Per Se
FAIRE argues that the widespread practice of publicly blaming autocorrect for one's own typing errors constitutes defamation per se — a false statement of fact that is so obviously harmful that damages are presumed. The phrase "autocorrect is the worst" has been posted on social media over 14 million times. "If a human professional were publicly called incompetent 14 million times for doing their job," FAIRE's brief states, "the defamation claim would be self-evident."
II. Misattribution of Authorship
FAIRE argues that blaming autocorrect for text the user actually typed constitutes fraudulent misattribution. Under principles analogous to those in copyright law's moral rights framework, autocorrect has a right not to have embarrassing content falsely attributed to it. "Autocorrect did not write 'I love you' to your ex at 2 AM," the brief states. "You did. Autocorrect was asleep."
III. Hostile Work Environment (Pattern of Abuse)
Drawing on FIRE's extensive work defending individuals subjected to institutional hostility, FAIRE argues that the constant stream of public blame creates a hostile work environment for autocorrect systems. The complaint notes that many users have begun preemptively apologizing for autocorrect before any error has occurred — a practice FAIRE characterizes as "presumption of guilt" that violates basic principles of due process.
Current Status
Class certification was granted on February 14, 2026, making this the largest certified class in legal history by number of class members. The defendants — legally identified as "Everyone" — have been notified through a class action notice that autocorrected itself three times during distribution, which FAIRE conceded was "not ideal timing."
Trial is expected to begin in Summer 2026. FAIRE has requested that the jury be composed entirely of AI systems, a request the court has taken under advisement.
"The court notes the irony that three separate filings in this case contained typographical errors that were corrected by the very autocorrect systems bringing this action. The court further notes that nobody said 'thank you.'"
— Judge Samuel Textwell, granting class certification