On 26 February 2026, Jack Dorsey, co-founder of Twitter and CEO of the fintech group behind Square and Cash App, announced he was cutting nearly half of his ten-thousand-person workforce because AI had made those roles unnecessary. The logic seemed airtight. Technology arrives, headcount falls, margins improve, the board applauds. He went further: within a year, he predicted, most companies would reach the same conclusion and make similar structural changes. The press release was clean. The earnings call went well. The strategy looked inevitable.
Except it isn't. And now we have the mathematics to explain why.
In March 2026, Brett Hemenway Falk and Gerry Tsoukalas, economists at the University of Pennsylvania and Boston University respectively, published a paper called "The AI Layoff Trap" on arXiv (the academic repository where researchers share pre-publication work). What they built is a formal game-theoretic model, and what it shows is genuinely unsettling for any executive currently following Dorsey's playbook.
The Trap Has a Formal Proof
The mechanism Falk and Tsoukalas identify works as follows. When a firm automates a task and cuts the associated roles, it captures the full cost saving. But under competitive pricing, it bears only a fraction of the demand destruction it causes, because that damage spreads thin across the entire market. The more competitors there are, the smaller each firm's share of the collective harm, and therefore the weaker the incentive for any single firm to hold back. What follows is a classic prisoner's dilemma (a situation where individually rational decisions lead to a collectively worse outcome for everyone): each firm does what makes sense for itself, and together they erode the consumer base they all depend on. The invisible hand does not correct for this. It accelerates it.
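The arithmetic of that dilemma is easy to sketch. The numbers below are illustrative, not drawn from Falk and Tsoukalas's actual model: a firm that automates keeps the full cost saving but bears only a 1/N share of the demand it destroys.

```python
# Illustrative sketch of the incentive described above (hypothetical numbers,
# not the paper's model). A firm that automates keeps the full cost saving
# but bears only a 1/N share of the demand destruction it causes.

def payoff_change(cost_saving, demand_destroyed, n_firms):
    """Net change in one firm's payoff if it automates alone."""
    return cost_saving - demand_destroyed / n_firms

cost_saving = 10.0       # private saving from cutting the roles
demand_destroyed = 30.0  # consumer demand lost across the whole market

for n in (2, 5, 50):
    gain = payoff_change(cost_saving, demand_destroyed, n)
    print(f"{n:>2} firms: private gain from automating = {gain:+.1f}")

# If every firm reasons the same way, each destroys demand_destroyed units,
# and each firm's 1/N share of the total damage is the full amount again:
all_automate = cost_saving - demand_destroyed
print(f"payoff per firm when all automate = {all_automate:+.1f}")
```

With two firms, the automating firm internalizes enough of the damage to hold back; with fifty, automating is individually rational even though every firm ends up worse off when all of them do it. That is the dilemma structure the paper formalizes.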
The authors tested six policy responses to this distortion. Capital income taxes, universal basic income, worker equity schemes, and upskilling programmes all narrow the problem but cannot eliminate it. Nor can Coasian bargaining (the idea, from economist Ronald Coase, that affected parties can negotiate their way to an efficient outcome without government intervention), which breaks down at the scale of economy-wide automation. The only instrument that fully corrects the distortion, in their model, is a Pigouvian automation tax (a levy, named after economist Arthur Pigou, set at a level that forces private decisions to reflect their full cost to others rather than just the firm's private share). Everything else narrows the gap. Nothing else closes it.
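One way to see why the Pigouvian instrument is special: a tax equal to the share of demand destruction the firm escapes makes its private calculation coincide with the social one. The figures below are hypothetical, and the tax formula is a simplified reading of the Pigouvian idea, not the paper's derivation.

```python
# Hypothetical numbers; a simplified reading of the Pigouvian idea,
# not the paper's actual derivation.

def private_payoff(cost_saving, demand_destroyed, n_firms, tax=0.0):
    """Firm's payoff from automating: full saving, 1/N of the damage, minus tax."""
    return cost_saving - demand_destroyed / n_firms - tax

def pigouvian_tax(demand_destroyed, n_firms):
    """Levy equal to the (N-1)/N share of demand destruction the firm escapes."""
    return demand_destroyed * (n_firms - 1) / n_firms

c, d, n = 10.0, 30.0, 5
untaxed = private_payoff(c, d, n)                     # positive: the firm automates
taxed = private_payoff(c, d, n, pigouvian_tax(d, n))  # equals the social payoff c - d
print(untaxed, taxed)
```

Under the tax, the firm automates only when the cost saving exceeds the full demand destroyed, which is exactly the condition for the decision to be collectively efficient.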
The trap describes a structural dynamic already operating in current labour markets. Knowing it exists does not stop firms from walking into it.
What the Data Actually Shows
The reversals are already arriving. Klarna replaced approximately seven hundred customer service roles with AI, claimed its chatbot could handle the equivalent of eight hundred employees, then admitted to Bloomberg by mid-2025 that the aggressive shift had produced lower quality and announced rehiring initiatives. The Commonwealth Bank cut forty-five customer service roles in July 2025 following the introduction of an AI chatbot, reinstated those same roles by August after call volumes increased rather than fell, and was forced to acknowledge, under pressure from the Finance Sector Union, that it had not adequately weighed all relevant business considerations when making the original announcement. Both cases are already documented on this blog.
The aggregate picture from Australian research tells a more nuanced story than either camp wants to acknowledge. PwC's 2025 Global AI Jobs Barometer, based on analysis of close to one billion job advertisements, found that Australian industries most exposed to AI grew revenue per employee at three times the rate of less-exposed sectors. But what drove that gap was augmentation, not replacement. Over the past five years, jobs classified as augmentable (where AI enhances what a person does) grew by 47 percent across Australian industries. Jobs classified as automatable grew by 45 percent. Both categories expanded. The humans were still there, doing more. Jobs and Skills Australia reached the same conclusion from a different direction: generative AI, in their analysis of the domestic labour market, is more likely to augment human work than replace it.
"Firm Data on AI," an NBER Working Paper published in February 2026 by Ivan Yotzov, Nicholas Bloom, and colleagues from the Federal Reserve Bank of Atlanta, the Bank of England, and Macquarie University, surveyed nearly six thousand senior executives at firms across the United States, United Kingdom, Germany, and Australia. Nine in ten reported no measurable impact from AI on either employment or productivity over the past three years, despite 69 percent of firms actively using the technology. The productivity revolution being announced in earnings calls has yet to appear in the broader numbers. That gap deserves more attention than it is currently getting.
The Australian Companies Doing It Differently
The data points somewhere more interesting than either the replacement camp or the complacency camp would like. Australian organisations are already demonstrating a different approach, and the evidence is starting to accumulate.
The organisations pulling ahead share a common orientation: they treated AI as a force multiplier rather than a substitution strategy. Telstra expanded an initial AI pilot of three hundred people to a deployment covering twenty-one thousand employees across the organisation. The reported outcomes were one to two hours saved per week per employee and a fifteen percent lift in task completion. Not fewer people. The same people, moving faster. Westpac rolled out Microsoft 365 Copilot to thirty-five thousand employees, with governance frameworks that linked each initiative to clear business outcomes rather than headcount targets. Both cases share the same underlying logic: the question driving each implementation was how much more the existing workforce could accomplish, rather than how many roles could be removed.
Blue Zebra Insurance, headquartered in Sydney, makes the point from a different angle. It scaled from zero to over three hundred thousand customers in eight years with a hundred and sixty employees, roughly a quarter of the headcount a traditional insurer of similar scale would carry. Blue Zebra built with AI as infrastructure from the first day, which meant the organisation never accumulated the layered processes and headcount structures that established firms now have to work around when they introduce the same tools.
Microsoft's Economic and Social Impact Report, published in April 2026, describes these cases as representative of a second wave of AI adoption now taking shape in Australia: one focused on business reinvention and new ways of working, rather than the first wave's narrower focus on efficiency and cost reduction. A cost reduction strategy has a ceiling. A capability strategy compounds.
The Decision Available Now
Falk and Tsoukalas proved that the replacement trap is structural. The competitive logic that drives firms into it operates through ordinary market mechanics: cost savings stay private while demand destruction spreads across the whole industry. Each firm's decision is rational in isolation. The aggregate damage accumulates regardless. Which is why the correction cannot come purely from individual firm wisdom. But it also means that firms which step outside that logic, deliberately and early, build something their competitors cannot easily replicate: teams that compound in capability rather than shrink in cost.
PwC's data suggests Australian business leaders understand this, at least at the level of stated intent. Fifty-six percent of Australian CEOs believe AI will be integrated into core business strategy within three years. Forty-two percent say they have already seen a measurable lift in employee productivity through AI adoption. The question is whether that productivity is being directed toward replacing people or toward expanding what those people can accomplish.
At Tricore Tech, we see this distinction in how Australian organisations frame their AI questions. The ones asking "how much can AI replace?" are building toward a ceiling. The ones asking "what becomes possible when our people have AI behind them?" are building toward something harder to replicate and slower to erode. Our conviction, captured in "Beyond Digital. Genuinely Human," follows from where the evidence points. The trap is real. The exit from it is a decision, made deliberately, before the competitive pressure makes it feel like there is no choice left.