Washington Tries Something Radical: Make AI a Negotiation, Not a Notice
On Sunday, a quiet post from OnLabor spotlighted a loud idea out of Washington state: stop treating artificial intelligence like just another piece of office equipment and start treating it like a decision that belongs at the bargaining table. House Bill 1622 would do exactly that. If a public agency wants to roll out or change an AI system that could touch wages or performance evaluations, management would no longer get to move first and bargain later. It would have to negotiate the decision itself.
That one pivot—decision bargaining instead of impact bargaining—sounds technical. It is. But it is also a rearrangement of power with real-world consequences for how AI actually lands in public-sector workplaces. For decades, Washington law has placed “use of technology” squarely in management’s domain. HB 1622 carves AI out of that domain and pulls it within the scope of collective bargaining. In a field where policy has mostly chased outcomes after the horse has left the barn, this bill tells the stable hand to keep the gate shut until everyone agrees on the plan.
What changes when AI has to be bargained?
Start with the definition. The bill paints AI broadly—machine-learning systems trained on data to perform tasks typically associated with human intelligence—and then trims only lightly. Routine third-party software updates that don't meaningfully affect compensation or evaluations are spared; everything else is in play. That breadth matters, because it narrows management's path to rebranding tools as "analytics" or "automation" and pushing them through. It pre-commits agencies to conversations about design choices, evaluation metrics, and safeguards before the first model gets a login.
Consider how this rewires procurement. Today, an agency might select a scheduling model that redistributes overtime, or a triage tool that reshapes caseload complexity, and then tell workers the system is live while promising to mitigate the fallout. Under HB 1622, the gating item becomes the negotiation itself. Unions gain leverage to demand pre-deployment testing with representative data, audit rights over model behavior, transparent performance thresholds, and funded training—features that can determine whether a tool augments workers or quietly changes the job into something else. Vendors, in turn, would need to show their work. “We’ll fix it in a patch” stops being a satisfactory answer when the patch triggers another round of bargaining.
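To make "show their work" concrete, here is a minimal Python sketch of what a bargained pre-deployment test could look like, assuming a negotiated error ceiling and a cap on the error gap between worker groups. Every name, number, and dataset below is hypothetical; nothing is drawn from the bill or any actual contract.

```python
# A toy sketch of a negotiated pre-deployment check. The metric,
# thresholds, group names, and data are invented assumptions, not
# anything specified in HB 1622.

def error_rate(predictions, actuals):
    """Fraction of predictions that disagree with ground truth."""
    wrong = sum(p != a for p, a in zip(predictions, actuals))
    return wrong / len(actuals)

# Hypothetical bargained acceptance criteria: an overall error ceiling,
# plus a cap on the gap between worker groups so no one unit quietly
# absorbs the model's mistakes.
MAX_ERROR = 0.10
MAX_GROUP_GAP = 0.05

def passes_bargained_test(results_by_group):
    """Return (pass/fail, per-group error rates) for the negotiated test."""
    rates = {g: error_rate(p, a) for g, (p, a) in results_by_group.items()}
    overall_ok = max(rates.values()) <= MAX_ERROR
    gap_ok = max(rates.values()) - min(rates.values()) <= MAX_GROUP_GAP
    return overall_ok and gap_ok, rates

# Representative test data from each affected unit (fabricated here):
results = {
    "field_staff":  ([1, 0, 1, 1], [1, 0, 0, 1]),  # one miss in four
    "office_staff": ([0, 1, 1, 0], [0, 1, 1, 0]),  # no misses
}
ok, rates = passes_bargained_test(results)
print(ok, rates)  # False {'field_staff': 0.25, 'office_staff': 0.0}
```

The specific metric matters less than the fact that it is written down, agreed to, and checked before the system touches anyone's rating.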
This is not abstract. Compensation and evaluations are where AI’s invisible hand becomes very visible. A forecasting model can subtly lower overtime by smoothing schedules. A risk score can shift which cases reach frontline staff and how quickly they’re closed. An automated evaluator can nudge performance ratings down, dollar adjustments in tow. When these systems arrive unilaterally, the negotiation happens after incentives and expectations have already hardened. Bargaining up front forces a different conversation: what problem are we solving, how will we measure success, and who is accountable when the system drifts?
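The overtime mechanism in particular is simple enough to show in a few lines. In this deliberately toy sketch (the schedules and the 40-hour threshold are invented for illustration), a smoothing forecast redistributes the same total workload and the overtime pay disappears without a single task being cut.

```python
# Toy illustration: forecast smoothing can erase overtime pay while the
# total workload stays identical. All numbers here are hypothetical.

def overtime_hours(schedule, threshold=40):
    """Sum of hours above the weekly overtime threshold, across workers."""
    return sum(max(hours - threshold, 0) for hours in schedule)

# Demand-driven schedule: peaks push some workers past 40 hours.
raw_schedule = [46, 34, 44, 36, 40]   # 200 hours total, 10 overtime hours

# A smoothing forecast levels the same 200 hours across five workers.
smoothed = [sum(raw_schedule) / len(raw_schedule)] * len(raw_schedule)

print(overtime_hours(raw_schedule))  # 10 -- paid at the overtime rate
print(overtime_hours(smoothed))      # 0  -- same work, smaller paychecks
```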
Washington as a policy lab
HB 1622 has already cleared the state House on a 58–38 vote and moved to the Senate Labor & Commerce Committee, where the staff summary sketches the expected divide: advocates emphasizing worker voice and reliability, opponents warning about management rights, cybersecurity, and cost. The bill also includes a "null and void" clause, meaning it won't take effect without funding—an honest admission that participation, testing, and transparency all cost money. As of the OnLabor write-up, it isn't law yet. But the contours are visible enough to study.
Zoom out and the bill looks less like a one-off and more like a waypoint on a larger map. States and cities are stitching guardrails around AI in hiring, evaluation, and public service delivery, sometimes because federal policy hasn't settled, sometimes because local stakes are immediate. New York's constraints on automated decision tools created a compliance industry almost overnight. Washington's move targets a different lever: the timing and locus of consent. It tells public employers that AI is not a mere workflow upgrade; it is a structural choice that must be co-authored with the people whose performance and pay it will influence.
The edges where the fights will happen
The bill’s clean line—“if it touches pay or evaluation, you bargain the decision”—invites new gray areas. What counts as “meaningfully affects” when a vendor ships an update that tweaks model calibration? How do agencies handle emergency patches that close security holes but also nudge outputs? Expect definition games around what is or isn’t “AI,” despite the bill’s wide framing. Expect procedural plays too: if bargaining becomes a bottleneck, will agencies break implementations into smaller steps to slide under the threshold?
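The calibration question is easy to understate. A hypothetical sketch (the scores, case IDs, and threshold are all invented): if the agency's decision threshold stays fixed, even a small recalibration shipped in a vendor update changes which cases get flagged, and with them, someone's caseload and closure stats.

```python
# Hypothetical: a "routine" vendor recalibration meets a fixed agency
# threshold. Scores, case IDs, and the shift are invented for illustration.

cases = {"A": 0.58, "B": 0.62, "C": 0.71, "D": 0.49}  # model risk scores
THRESHOLD = 0.60  # agency policy: at or above this, a case is fast-tracked

def flagged(scores, threshold=THRESHOLD):
    """Case IDs whose scores meet the fast-track threshold."""
    return sorted(c for c, s in scores.items() if s >= threshold)

# The update recalibrates every score down by 0.03 -- a tweak a vendor
# might plausibly describe as routine maintenance.
recalibrated = {c: round(s - 0.03, 2) for c, s in cases.items()}

print(flagged(cases))         # ['B', 'C'] before the patch
print(flagged(recalibrated))  # ['C'] after -- case B's work moves elsewhere
```

Nothing changed except a constant, yet whether that counts as "meaningfully affecting" evaluation is now a live question.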
There are governance tensions baked in. Management rights aren’t a relic; agencies do need to move fast sometimes, and cybersecurity is not a spectator sport. Yet speed without legitimacy is how you get resistant users, shadow tools, and litigation. HB 1622 doesn’t prevent deployment; it makes deployment contingent on negotiated design and accountability. For workers, that’s a path to making AI legible and corrigible. For management, it’s a slower road to a sturdier destination—systems that survive contact with reality because the people using them had a hand in building the guardrails.
If this model spreads
Public-sector AI would start to look different. Procurement templates would include audit clauses and model cards as standard attachments. Training budgets would be bundled with licenses. Evaluation schemes would be co-developed, including the right to contest algorithmic outputs with human review. Agencies would face pressure to document how they validated datasets against local populations rather than importing out-of-state risk scores and hoping for the best. And the softest change might be the most important: a habit of asking whether this tool is displacing judgment or supporting it, and adjusting the implementation accordingly.
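If the model-card attachment sounds abstract, it could be as plain as a structured record. This sketch invents every field, but it shows the kind of commitments that become inspectable once they are written into a procurement rather than left in a pitch deck.

```python
# A hypothetical shape for a bargained "model card" procurement attachment.
# Every field name and value here is invented for illustration; none of it
# comes from HB 1622 or any real vendor contract.
model_card = {
    "system": "shift-forecasting tool",          # what was procured
    "intended_use": "draft schedules for supervisor review",
    "training_data": "2019-2023 agency timesheets (local, not imported)",
    "known_limits": ["underestimates seasonal peaks", "no holiday signal"],
    "negotiated_metrics": {"max_error": 0.10, "max_group_gap": 0.05},
    "audit_rights": "union may request score logs quarterly",
    "human_review": "any worker may contest an output in writing",
}
```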
There are trade-offs. Decision bargaining can slow unilateral deployments, and delay carries opportunity costs—especially when tools promise backlog reduction or fraud detection. But speed is not neutral; it distributes power. In workplaces where AI can quietly restructure who gets what work, who is rated how, and who advances, front-loading negotiation is not just process. It is the mechanism by which a workforce chooses whether AI is a substitute, a supervisor, or a collaborator.
The signal beneath the statute
Washington’s sponsors have pitched the bill as ensuring AI benefits state employees and residents alike. Read that as politics if you wish, but it is also a design principle. AI in the public sector is increasingly less about dazzling capabilities and more about legitimacy: can the people affected see how the system works, challenge it when it doesn’t, and trust that it won’t quietly rewrite their job description? HB 1622 answers with a table and chairs. Before the model ships, sit down.
The headline from Sunday isn’t that a state discovered AI. It’s that a state is experimenting with who gets a vote when AI arrives. If Washington funds and enacts this—and if other states import the idea—the national posture shifts. Instead of treating AI as a managerial prerogative moderated by after-the-fact grievances, we start treating it as a shared choice made at the point where code meets compensation. That won’t solve every problem, or stop every bad system. But it changes the default, and defaults are destiny in bureaucracies.
Watch the budget line. Watch the Senate committee. And watch the vendors retool their pitch decks for a world where the real demo happens across from a bargaining team.

