In 1979, tucked inside an IBM training manual, there was a slide that feels like it was written for this exact moment. It read:
"A computer can never be held accountable. Therefore a computer must never make a management decision."
It’s a simple black-and-white rule from a green-screen era. But as we navigate what I am calling the "Great AI Labor Squeeze," in which we’re all being asked to do more with less by leaning on AI, we are hitting the limits of those AI systems. We’ve spent the last two years over-indexing on AI for management decisions, then pushing it into line work, and the cracks are starting to show.
Pain With No Gains
The data from late 2024 through early 2026 tells the story of an overzealous push to replace people with AI, and it has backfired.
When Failure is "No One’s Fault"
We are living in an accountability vacuum. Companies have outsourced the messy, human parts of running a business (hiring, performance filtering, and even layoffs) to algorithms. Managers are over-indexing on AI because it provides a buffer: it’s easier to say "the system flagged this" than to say "I am making this difficult choice."
These failures start as internal, systemic problems, then multiply once unsupervised systems are exposed to customers. When an AI-driven background check creates a bias scandal, or an automated "performance optimizer" fires a top performer over a data glitch, who do you call? You can't put an LLM on a Performance Improvement Plan, but you can certainly be sued for the false promises it makes on your behalf.
Re-Hiring in 2026
Survey data suggests the pendulum is starting to swing back. Gartner and Forrester are both predicting a massive "quiet re-hiring" phase. Why? Because as companies try to "do more with less," the "less" (the humans) are burning out, and the "more" (the AI) is hallucinating at scale. Just this month, IBM announced it will triple entry-level hiring, and it's about time.
We are entering a phase where the most successful managers won't be the ones who know the most prompts, but the ones willing to put their name on the line when the AI is wrong. Expect a rise in "Human-in-the-Loop" mandates, driven less by ethics than by liability.
The Bottom Line
AI is a brilliant co-pilot, but it’s a terrible captain. If it can't be held accountable, it can't make the final call. The workforce of 2026 doesn't need more AI-integrated workflows; it needs to pair those workflows with good old-fashioned human accountability.