Experience Outruns the Algorithm
For three years, the loudest question in offices has been whether AI would erase jobs wholesale or simply rearrange them. Yesterday, Business Insider put a hard number on the rearrangement, translating a Dallas Fed analysis into something uncomfortably clear: the machines are not flattening the labor market; they’re steepening it. The people climbing fastest are not the interns with perfect prompt hygiene but the veterans who already know where the bodies are buried in a workflow.
The research note, by J. Scott Davis of the Dallas Fed, looks at the post-ChatGPT period and finds a pattern that feels counterintuitive and inevitable at once. Since late 2022, overall U.S. employment has risen roughly 2.5%. In the sectors most exposed to AI, employment has actually dipped about 1%. At the same time, wages in those exposed sectors have grown around 8.5%, outpacing the 7.5% national average. If AI were simply substituting for humans in a one-for-one rout, you would not expect pay to rise faster where the robots are thickest. But pay is rising—just not for everyone.
Business Insider’s read of the Fed’s evidence is straightforward: older, experienced workers in AI-exposed occupations are doing relatively well; younger workers, especially those under 25, are not. Behind the averages sits a mechanism that finally gives structure to the anecdotes: AI is excellent at reproducing codified knowledge—the steps, templates, and textbook answers that form the scaffolding of entry-level work. It is stubbornly worse at the kind of tacit knowledge that senior people accumulate through trial, error, and scars—the judgment calls, the timing, the unwritten protocols, the “this looks right but it isn’t” sense that can’t be captured by a manual. The returns to that tacit layer are rising, and so is the bargaining power of those who hold it.
The Machine Eats the Manual, Not the Mentorship
Davis takes the intuition and quantifies it. In occupations where experience has historically commanded a premium, greater AI exposure since 2022 is associated with faster wage growth. Where experience has mattered less, AI exposure correlates with weaker wage growth. That’s the map: where the job is mostly codified, substitution risk. Where it is heavy on tacit skill, augmentation upside. The machine devours the manual; it does not replace the conversation after the meeting where the real decision got made.
When that map overlays a company’s org chart, the changes look like productivity, but the deeper shift is the reallocation of power. Senior staff can now offload rote setup to tools and spend more of their attention on high-stakes calls. Their output looks better; their unique contributions become more legible; their leverage increases. Meanwhile, the traditional on-ramp for juniors—doing the codified tasks that gradually morph into judgment-bearing work—shrinks. If AI handles the tasks that once taught you how the work actually works, where do you get your reps?
The First Rung Is Where the Friction Is
The Dallas Fed’s note doesn’t just describe a skew; it describes a bottleneck. Workers under 25 in AI-exposed fields are seeing weaker employment outcomes. That’s not some vague cultural anxiety about Gen Z and attention spans. It’s a structural fact: if the codified layer is automated, the learning-by-doing route closes. Trimming the bottom rung is easy in a spreadsheet and unsustainable in reality. A firm can run “senior-heavy” for a few cycles, but without a pipeline, institutional memory ages and thins. When the veterans leave, they take the tacit playbook with them, and it does not live in Confluence.
What’s novel here is that a central bank research note is flagging an apprenticeship problem as a macro signal. We are used to thinking about AI in terms of tasks. The Dallas Fed has quietly redirected the conversation to rungs: who gets access to the contexts where tacit knowledge forms, and how quickly can they climb once there?
What the Wage Pop Really Means
That 8.5% wage growth in AI-exposed sectors is the market’s way of saying: seasoned judgment that can harness AI is scarce and newly valuable. It doesn’t absolve firms of hard choices; it does expose a hidden cost of “junior-light” strategies. If you redesign work so that seniors do more with AI, you increase throughput. But you also concentrate fragility. A team that runs on three people instead of seven looks efficient until one exits and you discover the role was propped up by improvisations no model recorded.
So the corporate problem is not just reskilling. It is job architecture. If AI handles the manual, you have to invent new substitutes for the tacit rehearsal juniors used to get by doing that manual. Shadowing without stakes won’t cut it; expertise is context plus consequence. That means creating supervised exposure to messy, real situations where the judgment muscles actually form—client interactions, live incidents, ambiguous data, tradeoffs that pinch. It means pairing juniors with AI in ways that escalate ownership over time rather than trapping them in prompt-chasing piecework.
The Hidden Accounting for Tacit Capital
Organizations already measure software licenses, cloud spend, and headcount. They rarely measure tacit capital: how many people can resolve an issue that has no playbook? How quickly does that capacity replenish when seniors rotate or retire? The Fed’s “experience premium” result is a prompt to start counting. It suggests new KPIs: time to independent judgment for new hires, ratio of codified-to-tacit exposure in early roles, and actual career velocity for under-25 employees in AI-heavy teams. If those curves flatten while output headlines look great, you’re financing this year’s productivity with next year’s capability.
There is also a cultural recalibration buried in these findings. For the last decade, many companies optimized for interchangeable talent and codified processes that could scale. AI doubles down on that logic for the bottom of the stack. But the top of the stack suddenly matters more, not less. Craft re-enters the room. The people who can read an unstructured situation and steer it are now the constraint. We don’t have a great language for that in corporate planning because it sounds suspiciously artisanal. Yet, per the data, that’s where the wage signal is pointing.
Designing New Ramps Without Lying to Ourselves
The easy answer is “simulate the messy parts” or “let AI tutor the juniors.” Those will help, and they should be tried. But simulated ambiguity is still scripted. The political economy of a live project—misaligned incentives, partial information, reputational risk—is the crucible where tacit skill forms. If firms want juniors who can someday exercise the senior’s judgment, they must create protected pathways into that crucible. Think rotational programs that attach early-career employees to consequential work with real, bounded stakes; documented decision reviews where the why, not just the what, gets captured; and compensation models that reward senior staff for teaching, not hoarding.
None of this argues against AI adoption; it argues for being explicit about the organizational redesign it requires. If AI takes the manual, leaders must replace the mentorship the manual used to unlock. Otherwise, you end up with a thin layer of highly paid pilots and an empty cockpit behind them.
What Yesterday Actually Changed
Plenty of executives have warned that AI would wipe out entry-level jobs. The Dallas Fed’s work, spotlighted by Business Insider, draws a sharper contour: we are not seeing a broad layoff wave; we are seeing a decisive tilt inside firms. Experienced, tacit-knowledge workers in AI-exposed roles are pulling ahead. Younger entrants are encountering fewer ways to acquire the very experience the market now prizes. That is a solvable design problem, but only if leaders treat it as central to the AI transition, not a sentimental add-on.
In other words, the headline isn’t “AI takes jobs.” It’s “AI rewrites who benefits, and when.” Yesterday’s data turned that from a talking point into a measurable trend. If companies rebuild the first rung, they’ll convert AI’s codified strengths into durable capability. If they don’t, they’ll discover—right when they most need judgment—that the only thing harder than coding tacit knowledge is buying it overnight.