The Company Fired You. The AI Didn’t.
The pitch deck lands like a promise: a row of charts pointing up and to the right, a line about “AI-enabled efficiency,” and the quiet implication that the firm has finally found the courage to do what common sense says is overdue and replace expensive, inconsistent humans. You’ve seen this movie. Yesterday, Cory Doctorow gave it a blunt spoiler in The Guardian: “AI can’t do your job.” The twist is nastier. Even when the model can’t do the work, the spreadsheet can still decide you’re gone. That is the contradiction powering the current moment, and the employment crisis lurking inside it.
Valuations Need a Body Count
Doctorow pulls on a thread that usually gets cut when we talk about AI and jobs. The story isn’t just about capability; it’s about capital. Growth-hungry tech companies have to justify their multiples with a credible tale of labor subtraction. “We will replace people with software” isn’t an outcome; it is a narrative instrument designed to extend the runway and refinance the dream. We’ve heard versions of it before, attached to other acronyms and other future markets. What’s novel here is the scale of the buildout (colossal data centers, power contracts measured in decades, a procurement spree for chips) and the way those balance sheets demand a workforce-sized explanation, right now, long before the models can reliably shoulder the jobs they’re said to replace.
In practice, the gap between promise and capacity gets bridged by people you don’t see. An army of labelers, red teamers, and content moderators cleans up the outputs. Inside companies, the humans who weren’t fired become the human middleware. The work is still being done; it’s just being done under a new theory of value, one that pushes accountability downward while praise for “AI-enabled scale” migrates upward.
The Reverse Centaur and the Accountability Sink
If the 2010s gave us the centaur—humans paired with software to amplify strengths—this cycle is minting its mirror image. In Doctorow’s telling, the “reverse centaur” is a worker who adapts to the machine’s needs, not the other way around. Oversight becomes the job. Your screen fills with model outputs you must approve at speed because throughput is the KPI. The twist of the knife is the liability. The system fails, but the paper trail says you signed off. The machine is lauded for its productivity, and the human becomes the accountability sink.
Consider radiology, the canonical domain of computer vision ambition. Safety dictates human review, and that’s good. But when the workflow is built around the model as primary and the clinician as certifier of last resort, oversight mutates into a legal hot potato. The clinician’s time gets fragmented, dictated by the cadence of a tool that isn’t safe to run alone. The oversight isn’t augmentation; it’s risk absorption—without matching authority to throttle or halt the system when its error profile drifts. Multiply this pattern across claims processing, content moderation, compliance checks, even newsroom copy desks, and you get a macro story about human expertise being converted into a heat shield for brittle automation.
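To make “authority to throttle or halt” concrete, here is a minimal sketch, assuming a rolling error-rate monitor wired to a kill switch that the reviewing human actually controls. The DriftGuard class, the window size, and the error budget are illustrative assumptions, not a description of any real clinical or claims system.

```python
from collections import deque

class DriftGuard:
    """Rolling error-rate monitor with an explicit halt authority.

    The person who signs off also holds the switch: when the observed
    error rate exceeds the agreed budget, the pipeline stops instead of
    routing more risk through a certifier of last resort.
    """

    def __init__(self, error_budget: float = 0.05, window: int = 100):
        self.error_budget = error_budget
        self.outcomes = deque(maxlen=window)  # True = reviewer flagged an error
        self.halted = False

    def error_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def record(self, was_error: bool) -> None:
        self.outcomes.append(was_error)
        full = len(self.outcomes) == self.outcomes.maxlen
        if full and self.error_rate() > self.error_budget:
            self.halted = True  # authority finally matches accountability

guard = DriftGuard(error_budget=0.05, window=100)
# Simulate a drifting model whose error rate (~13%) blows past the budget.
for i in range(300):
    if guard.halted:
        print(f"halted at item {i}, error rate {guard.error_rate():.1%}")
        break
    guard.record(was_error=(i % 8 == 0))
```

The design choice matters more than the code: the halt condition is owned by the same role that absorbs the blame, which is precisely what the reverse-centaur workflow withholds.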
Copyright: A Quiet, Functional Guardrail
There is, however, a guardrail that has teeth right now, not in some imagined future standard. The U.S. Copyright Office’s human-authorship rule has become an unintentional labor policy. Doctorow puts it plainly: “The only way these companies can get a copyright is to pay humans to do creative work.” For studios, publishers, and brands, that is not a philosophical debate; it is how you secure the asset. If you want to own the movie, the novel, the ad campaign, you need a human author. This is why so many “AI-only” content shops quietly hire editors and writers to humanize outputs and sign on the dotted line.
That logic points to a larger strategic confusion. Expanding copyright to cover training data might sound like creator-friendly reform, but as Doctorow argues, it would do more to fortify data-rich incumbents than to put money in working artists’ pockets. The alternative is older and more direct: sectoral bargaining that sets floor rules for how AI can be deployed across an industry. The Writers Guild showed that contracts can mandate human credit, consent for training, and clear boundaries for AI’s role. That is not an anti-tech stance; it’s a recognition that ownership and accountability must rhyme, and that individual workers can’t negotiate with a valuation story on their own.
When the Bubble Deflates, What Remains
Every bubble leaves behind infrastructure and a few durable use cases. The bet here is similar. Much of the overbuilt capacity won’t pencil out. Many companies born to ride the hype will fold or vanish. But useful tools will remain, especially smaller, cheaper, open models that run closer to where the work happens. Those tools won’t replace a newsroom or a clinic or a law practice. They will slot into workflows that are actually designed around human decision-making, with audit logs, error budgets, and clear off-ramps when the model is uncertain, as in the sketch below.
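Here is a minimal sketch of that off-ramp pattern, assuming a confidence threshold, a stubbed-out model call, and a JSON audit log. The names classify, human_review, and CONFIDENCE_FLOOR are hypothetical; a real deployment would tune the threshold against its error budget rather than hard-coding it.

```python
import json
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("audit")

CONFIDENCE_FLOOR = 0.85  # hypothetical; set from the error budget

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def classify(item: str) -> tuple[str, float]:
    """Stand-in for a model call; returns (label, confidence)."""
    return ("approve", 0.42 if "edge case" in item else 0.97)

def human_review(item: str) -> str:
    """Stand-in for an escalation queue a person actually works."""
    return "needs-review"

def decide(item: str) -> Decision:
    label, confidence = classify(item)
    if confidence < CONFIDENCE_FLOOR:
        # Off-ramp: uncertain outputs go to a person, by design.
        decision = Decision(human_review(item), confidence, decided_by="human")
    else:
        decision = Decision(label, confidence, decided_by="model")
    # Audit log by default, not as an add-on: every decision is traceable.
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "item": item,
        "label": decision.label,
        "confidence": round(decision.confidence, 3),
        "decided_by": decision.decided_by,
    }))
    return decision

for item in ["routine claim", "edge case claim"]:
    print(decide(item))
```

Run it and the second item escalates instead of shipping: the human sits upstream of the output, not mopping up after it.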
That is the centaur worth defending: the human on top, the tool under the saddle. It’s a harder sell on an earnings call because it doesn’t promise headcount annihilation. But customers can feel when quality improves instead of eroding, when a support interaction ends with clarity instead of a hallucinated policy, when a diagnosis is accompanied by traceable reasoning. Those gains compound. They just compound in the ledger labeled “retention and safety,” not in the one labeled “immediate FTE savings.”
How to Refuse the Reverse Centaur
If you work inside a company mid-pivot to “AI-first,” ask for authority to match your assigned accountability. If your name is on the sign-off, your team should control thresholds, escalation paths, and the right to shut a system down when it drifts. You want auditability by default, not as an add-on. Push to measure the real unit economics: the time you spend triaging model errors, the rework rate, the downstream cost of bad outputs. Those are not soft metrics; they’re the difference between a tool and a liability. If you’re a manager, resist vanity dashboards that celebrate throughput while hiding the cleanup cost. Spend the political capital to state the obvious: shipping more errors faster is not productivity.
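As a back-of-the-envelope illustration of those unit economics, here is a sketch with entirely hypothetical numbers; the structure of the calculation is the point, not the values. Once triage time and shipped errors are priced in, “more outputs per hour” can cost more than a slower baseline.

```python
# All numbers are hypothetical; swap in your own measurements.
outputs_per_hour = 120            # model-drafted items a reviewer signs off on
error_rate = 0.08                 # share of outputs that are wrong
catch_rate = 0.75                 # share of errors the reviewer actually catches
triage_minutes_per_error = 15     # reviewer time to fix one caught error
downstream_cost_per_miss = 200.0  # dollar cost of an error that ships
reviewer_hourly_cost = 60.0       # loaded cost of the reviewer

errors = outputs_per_hour * error_rate  # 9.6 errors per hour
caught = errors * catch_rate            # 7.2 caught
missed = errors - caught                # 2.4 shipped

triage_hours = caught * triage_minutes_per_error / 60
labor_cost = reviewer_hourly_cost * (1 + triage_hours)  # review hour + triage
shipped_error_cost = missed * downstream_cost_per_miss

cost_per_output = (labor_cost + shipped_error_cost) / outputs_per_hour
print(f"true cost per output: ${cost_per_output:.2f}")
print(f"of which shipped errors: ${shipped_error_cost / outputs_per_hour:.2f}")
```

With these placeholder numbers, shipped errors alone cost $4.00 per output, a figure that never appears on a throughput dashboard.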
And if you’re in a creative field, remember the power of authorship. Contract for it. Organize around it. The law already acknowledges that the work only becomes an asset when a human makes it. Use that fact to design workflows that treat AI as a drafting table, not a byline.
Why This Was Yesterday’s Big Employment Story
Plenty of essays litigate whether today’s systems are smart enough to do X. Doctorow’s piece is consequential because it explains why companies behave as if the answer is yes even when it’s no. The force moving people out of jobs isn’t a sentient machine; it’s a financing story that needs a labor-saving headline. Tie that to a practical labor remedy—industry-level bargaining to set sane guardrails—and you have a rare combination: diagnosis and treatment. In a sentence that should be taped to every conference room door, he writes, “AI can’t do your job.” The danger, and the hope, is in what leaders decide to do with that admission.
If we’re lucky, the cycle ends with fewer accountability sinks, fewer hollow promises, and more honest centaur work. If not, we’ll keep pretending that a model can replace judgment, right up until the bill for the cleanup arrives—and the only people left to pay it are the ones the spreadsheet said we didn’t need.
Read Cory Doctorow’s full argument in The Guardian: AI companies will fail. We can salvage something from the wreckage.

