Every few months, someone predicts AI will carpet-bomb the job market and leave us all learning how to grow potatoes by 2025. The latest headline to make the rounds: record unemployment next year thanks to AI. It’s a great way to goose clicks and juice dopamine. It’s also not how technology diffusion, corporate governance, or basic economics work.
The real story is more boring, more nuanced, and way more important: AI is already rewriting how work gets done, but it’s doing it with a scalpel, not a nuke.
Let’s start with what credible data actually says, because the vibes are outpacing the facts. The IMF estimates that about 40% of global jobs are exposed to AI; in advanced economies, that exposure stretches closer to 60%. Importantly, “exposed” doesn’t mean “deleted.” Roughly half of that exposure is complementary—AI makes a human more productive—and half is substitutive—AI can do significant chunks of the job on its own. Goldman Sachs pegs the share of tasks that genAI could automate across the US/EU at roughly a quarter, and models a potential displacement of around 7% of jobs in the long run if companies go all-in, tempered by new roles and growth from productivity gains. McKinsey thinks genAI could add $2.6–$4.4 trillion in annual productivity globally, with 50% of today’s work activities likely automated sometime between 2030 and 2060, and a midpoint around 2045. The World Economic Forum’s 2023 report sees a net loss of about 14 million roles by 2027 globally—2% of current employment—because while 83 million roles shrink or vanish, 69 million new ones emerge.
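The WEF figures are worth spelling out as arithmetic, because the headline number is just the gap between two flows. A back-of-envelope check in Python; the ~673 million employment base is an assumption added here to make the 2% figure work out, not a number stated in this article:

```python
# Back-of-envelope check on the WEF Future of Jobs 2023 figures cited above.
# The employment base (~673 million jobs covered by the WEF dataset) is an
# assumption introduced for illustration.

roles_lost_m = 83        # roles expected to shrink or vanish by 2027, millions
roles_created_m = 69     # new roles expected to emerge by 2027, millions
employment_base_m = 673  # assumed jobs covered by the survey, millions

net_change_m = roles_created_m - roles_lost_m
print(f"Net change: {net_change_m} million roles")                   # -14 million
print(f"As share of base: {net_change_m / employment_base_m:.1%}")   # about -2%
```

In other words, "net loss" hides a lot of churn: far more roles are created and destroyed than the 14 million net figure suggests.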
That forward guidance isn’t a lullaby, but it’s nowhere near “Dust Bowl 2: Algorithmic Boogaloo.” For the US to notch “record unemployment” driven by AI alone by 2025, you’d need something close to instantaneous, economy-wide adoption with perfect integration, zero compliance friction, cheap infinite compute, reliable models that don’t hallucinate, and executives who collectively decide to replace people en masse rather than redeploy them. Meanwhile, the historical unemployment “records” you’re trying to beat are 14.7% in April 2020 and roughly 25% in the Great Depression. Tech adoption simply doesn’t bend like that on command.
What does bend? Job content. Workflows. Org charts. That’s already happening, and if you’re in the blast radius, you can feel it.
The first pain points are exactly where you’d expect: high-volume, formulaic, knowledge-adjacent tasks. Think frontline customer support, L1/L2 help desk, routine back-office paperwork, entry-level data analysis, basic ad copy and content variants, paralegal research, compliance monitoring, QA, even chunks of software development like writing tests and boilerplate. In these lanes, genAI behaves like a relentless junior who works 24/7 and never complains. That shrinks the demand for actual juniors.
Before you pour one out for the entire white-collar class, there’s the other half of the story: augmentation beats annihilation in a lot of domains. Healthcare is a high-friction, high-liability puzzle that benefits from AI triage, scribing, imaging assistance, and population-health analytics—but it still runs on human judgment. Skilled trades enjoy diagnostic and planning copilots, not robotic plumbers en masse. Education gets AI lesson planning, formative assessment, and personalized feedback—teachers aren’t disappearing. Engineering, sales, operations, law, finance, and product all have high-leverage use cases where AI shaves hours off grunt work and frees up time for stakeholder management, strategy, and actual decision-making. The common thread across the “safe-ish” jobs isn’t fairy dust; it’s the premium on domain expertise, accountability, and context.
So if mass unemployment is unlikely next year, why do layoffs keep pairing themselves with AI on earnings calls? Because AI is the poster child for a broader efficiency story CFOs have wanted to tell for a while. Post-pandemic overhiring met higher interest rates, shareholder impatience, and boards demanding an AI plan. Put those together and you get rolling restructurings framed as “AI-driven transformation.” Some of that is real—teams that integrate AI can do more with fewer people—but a lot is timing and optics. “We downsized the org” sounds better when you can point to a shiny new toolkit.
The speed limit here isn’t just willpower. Integrating AI into actual, regulated workflows is a buzzkill of the first order. You need data pipelines, privacy and retention policies, model governance, risk and controls, human-in-the-loop designs, change management, and continuous monitoring. You also need to stop models from making things up at scale. None of that happens because a VP copied a prompt template from LinkedIn. The companies that quietly win are the ones building boring plumbing: retrieval pipelines tied to sanctioned data, access controls, audit trails, reviewed prompts, calibrated error budgets, and a small catalog of use cases with measurable KPIs. That’s not a doomsday timeline; it’s a two-to-three-year grind.
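The "boring plumbing" above can be made concrete. A minimal sketch of retrieval tied to sanctioned data with an audit trail; every name here (`SANCTIONED_DOCS`, `answer_with_audit`, the keyword matching) is hypothetical, and the toy lookup stands in for a real vector store and access-control layer:

```python
import datetime
import json

# Toy stand-in for a sanctioned corpus. In a real system this would be a
# vector store fed only by approved, access-controlled data pipelines.
SANCTIONED_DOCS = {
    "retention-policy": "Customer records are retained for 7 years.",
    "refund-policy": "Refunds are issued within 14 days of approval.",
}

AUDIT_LOG = []  # in production: append-only storage, not an in-memory list

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval over sanctioned documents only."""
    terms = query.lower().split()
    return [doc_id for doc_id, text in SANCTIONED_DOCS.items()
            if any(t in text.lower() or t in doc_id for t in terms)]

def answer_with_audit(user: str, query: str) -> str:
    """Answer from retrieved context and record who asked what, plus which
    sources grounded the answer -- the trail reviewers and auditors need."""
    doc_ids = retrieve(query)
    AUDIT_LOG.append(json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "query": query,
        "sources": doc_ids,
    }))
    if not doc_ids:
        # No sanctioned grounding: refuse rather than let the model guess.
        return "No sanctioned source covers this; escalating to a human."
    context = " ".join(SANCTIONED_DOCS[d] for d in doc_ids)
    return f"Based on {', '.join(doc_ids)}: {context}"

print(answer_with_audit("analyst-42", "what is the refund policy?"))
```

The point of the sketch is the shape, not the retrieval quality: answers only come from approved data, unanswerable queries escalate instead of hallucinating, and every interaction leaves a reviewable record.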
Meanwhile, the macro constraints matter. Compute isn’t free or infinite. Data quality is all over the place. Enterprise vendors keep changing their pricing and platform story every quarter. Regulators are circling. Users don’t adopt tools they don’t trust. Every single one of these frictions slows the “replace everyone by Q4” fantasy.
Now, let’s call a few shots—because yes, there will be winners and losers.
Software teams get smaller, more senior, and more leveraged. Copilots write the boring bits, surface patterns, and accelerate refactors. Great engineers become force multipliers. Weak ones get exposed—and fast.
Ops, finance, and compliance teams that master automation pull away from their peers. The work doesn’t vanish, but the baseline shifts: fewer bodies moving spreadsheets, more humans supervising systems, validating outliers, and handling gnarly edge cases.
Customer support consolidates. Multi-lingual AI agents handle tier-0 and tier-1 with escalating guardrails; humans do the hard stuff and the empathy. Brands that do this well can improve CSAT while shrinking wait times. Brands that do this lazily will ship a rage machine.
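The "escalating guardrails" idea above boils down to a routing rule: the bot only answers when the intent is low-stakes, the classifier is confident, and the customer isn’t upset. A sketch with invented thresholds and field names, not drawn from any real support platform:

```python
def route_ticket(intent: str, intent_confidence: float, sentiment: float,
                 tier0_intents: set[str]) -> str:
    """Decide whether the AI agent answers or a human takes over.

    intent_confidence: classifier confidence in [0, 1] (assumed field)
    sentiment: [-1, 1], negative = upset customer (assumed field)
    """
    # Guardrail 1: only well-understood, low-stakes intents stay automated.
    if intent not in tier0_intents:
        return "human"
    # Guardrail 2: low confidence means we don't guess -- escalate.
    if intent_confidence < 0.8:
        return "human"
    # Guardrail 3: visibly upset customers get a person, not a bot.
    if sentiment < -0.5:
        return "human"
    return "ai_agent"

TIER0 = {"password_reset", "order_status", "store_hours"}
print(route_ticket("order_status", 0.95, 0.1, TIER0))   # ai_agent
print(route_ticket("order_status", 0.95, -0.9, TIER0))  # human
```

The difference between the good version and the rage machine is entirely in where those thresholds sit and whether anyone measures what happens after each handoff.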
Marketing gets weird. Content volume explodes, so distribution, targeting, and brand voice become the moats. First drafts are free; judgment costs more.
And yes, a lot of entry-level paths narrow in the short term. That’s not fair, but it is predictable. We’ve had decades where “pay dues doing repetitive work” was the on-ramp. AI is welding shut some of those doors. The replacements are going to look more like apprenticeships, project-based showcases, and proof you can operate a stack, not just talk about it.
If you run a company, you don’t need a TED Talk; you need a playbook. Start by auditing tasks, not jobs. Map the top ten workflows that eat time and carry measurable cost or risk. Pilot augmentation where error tolerance is high and feedback cycles are short—internal knowledge search, email triage, code assist, reporting, QA. Build governance in from day one instead of duct-taping it together later. Track real metrics—cycle time, error rate, rework, NPS, margin. If a pilot doesn’t move a KPI, kill it without ceremony. If it does, productize and scale. And for the love of everything holy, invest in some fucking training that actually changes behavior. Tool fluency, prompt design, and data literacy aren’t a vibe; they’re the new Excel.
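The kill-or-scale decision in that playbook is just a comparison against baseline metrics. A minimal sketch; the pilot numbers and the 10% improvement threshold are made up for illustration:

```python
# Hypothetical pilot metrics: baseline vs. with-AI. Lower is better for both
# KPIs here. All numbers are invented for illustration.
baseline_kpis = {"cycle_time_hours": 10.0, "error_rate": 0.08}
pilot_kpis    = {"cycle_time_hours": 7.5,  "error_rate": 0.05}

MIN_IMPROVEMENT = 0.10  # require at least a 10% relative improvement

def pilot_verdict(baseline: dict, pilot: dict) -> str:
    """Scale the pilot only if at least one KPI improved meaningfully;
    otherwise kill it without ceremony."""
    improvements = {
        kpi: (baseline[kpi] - pilot[kpi]) / baseline[kpi]  # relative reduction
        for kpi in baseline
    }
    moved = {k: v for k, v in improvements.items() if v >= MIN_IMPROVEMENT}
    return "scale" if moved else "kill"

print(pilot_verdict(baseline_kpis, pilot_kpis))
```

The discipline is in the threshold being set before the pilot starts, so nobody gets to redefine success after the fact.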
If you’re a worker, this is the part where you decide whether AI is a threat, a tool, or a trampoline. Get fluent in the two or three AI systems that directly touch your role. Build repeatable workflows. Quantify your leverage: show the hours saved or revenue generated because of how you use these tools, and bring those receipts to comp conversations. Climb the stack toward problems that require judgment, context, and ownership. If your day is 80% things that can be templatized, you’re on borrowed time. Move.
So where does that leave the “record unemployment by 2025” crowd? With a story that mistakes real disruption for instant apocalypse. That confusion isn’t harmless—it scares people, distracts leaders, and encourages lazy decisions. But it also misses the opportunity staring everyone in the face. The next 24 months aren’t about who screams the loudest on social about AI’s destiny. They’re about who quietly wires this tech into core processes, measures the hell out of it, protects against obvious failure modes, and uses the gains to build better products with fewer mistakes.
AI won’t crater the job market by next year. It will, however, reprice mediocrity. It will compress the middle, remove a lot of busywork oxygen, and raise the bar for what “good” looks like across white-collar work. That’s brutal in spots, but it’s also where the productivity upside lives. Adaptation is not optional, and it’s not rocket science either: build, measure, learn—without burning the house down.
Record unemployment? No. Record job reshaping? Already here. (via BGR)