Insight
April 22, 2026

Two Big 4 firms, two opposite bets on AI agents. What the research actually says.

PwC is cutting US entry-level hiring by a third. EY just deployed AI agents to 130,000 auditors and kept headcount flat. Two Big 4 firms, two opposite bets on what AI agents mean for people. Here's what the MIT research actually says about which approach works.

In the space of a few months, two of the world's largest professional services firms made very different calls on what AI agents mean for their people.

PwC is reducing entry-level hiring in the US by roughly a third. An internal presentation shared with the firm's alumni network projected a 32% drop in tax and assurance associate hiring between 2025 and 2028, and a 39% drop in audit. PwC cited three drivers: AI and automation absorbing routine entry-level work, acceleration centers offshoring associate tasks to lower-cost markets, and historically low attrition rates (ManagementConsulted, 2025).

EY went in another direction. In April 2026, the firm announced an enterprise-scale rollout of agentic AI across its 130,000 Assurance professionals in more than 150 countries. The stated goal, in the words of Global CEO Janet Truncale, is a "human-led, AI-powered audit of the future." Headcount stays. EY is running a global training program through 2026 to upskill every auditor and technology risk professional alongside the new tools (EY newsroom, April 2026).

Same technology. Same industry. Two very different assumptions about what AI agents mean for people.

Neither firm has made a mistake. They're running different experiments, and the outcomes will tell the story over the next few years. But the research on what actually makes AI deployments work is worth sitting with before anyone else in professional services follows one of these paths.

The research leans toward augmentation, not replacement

New research from MIT Sloan, published in March 2025, draws a clear line between automation and augmentation. Automation transfers a task from a human to a machine. Augmentation uses the machine to make the human more productive, often on tasks they couldn't do well before. The researchers argue that for a wide range of jobs, augmentation produces more value than full automation (MIT Sloan, 2025).

A separate MIT Sloan piece on generative AI and productivity found something practical for firms deploying these tools. Workers get more value out of AI when onboarding is structured so they learn where the AI performs well and where it fails, and when managers reconfigure roles around the new tools instead of layering AI on top of existing workflows (MIT Sloan, 2026).

MIT's 2025 State of AI in Business report added a more sobering finding. Despite high adoption of tools like ChatGPT and Copilot, most enterprise AI projects have not moved the P&L. One reason: the tools forget context, don't learn from corrections, and can't evolve with the business. For mission-critical work, 90% of users still prefer humans (MIT Media Lab via Mind the Product, 2025).

These three findings don't prove either EY or PwC wrong. They do suggest that the value of AI in knowledge work depends heavily on the humans around it, the workflows they operate in, and the feedback they give the system.

The argument for PwC's approach

There's a real case for PwC's direction. Routine entry-level work in audit and tax, including reconciliations, sampling, first-pass document review, and compliance checks, is genuinely well-suited to agents. PwC's leadership has been transparent that these tasks are being handled by AI or offshored to acceleration centers, and the firm is investing heavily in upskilling the people who remain.

PwC's AI assurance leader Jenn Kosar told Business Insider she expects new hires to be performing today's manager-level work within three years. If that timeline holds, a smaller, more skilled intake makes economic sense. The firm isn't abandoning junior talent. It's redefining what junior means.

Worth noting: PwC's own Global Workforce Hopes & Fears Survey 2025, based on responses from 9,394 entry-level employees across 48 economies, found that entry-level workers are more curious (47%) and excited (38%) about AI than worried (29%) (PwC, 2026). The workforce isn't resisting the shift. The open question is whether the new, smaller intake produces enough seasoned practitioners three, five, and ten years from now.

The argument for EY's approach

EY's bet assumes that keeping the people makes the technology better, not just that it protects the pipeline. The firm's public positioning is explicit: agents handle orchestration and routine tasks, humans retain judgment, skepticism, and final sign-off. That framing matches what MIT Sloan's augmentation research recommends.

It also fits with a practical reality in audit and assurance. Agent output needs to be traceable to human judgment under ISA and PCAOB rules. Regulators are watching closely. The UK's Financial Reporting Council has already signaled closer scrutiny of AI in audit methodology (ResultSense, 2026). For highly regulated work, keeping more people close to the agents, where they can correct them, refine them, and catch edge cases, is a governance advantage as much as a quality one.

The harder part of EY's bet is the one nobody can validate for two or three years. Keeping headcount flat while deploying agents only produces real leverage if the firm actually redesigns how work gets done. If juniors keep doing what they did before and the agents become one more tool to fight with, the productivity gain disappears.

What this actually means for firms outside the Big 4

Most audit, accounting, M&A, and insurance firms won't be rolling out 130,000-user agentic platforms this year. But the same fork in the road is coming to every firm considering AI, just at smaller scale.

The question isn't "how much headcount can we cut?" It's "who inside the firm is actually positioned to teach the agents what good looks like in our work?"

If that's a small number of senior people who are already stretched, the rollout will plateau. If it's a broader group of practitioners using tools designed for correction and feedback, the system gets better every month.

The MIT research suggests a few practical starting points. Invest in an onboarding phase where people learn what the tool does well and where it fails. Reconfigure roles around the tool instead of layering it on top of existing workflows. Keep humans in the loop wherever a mistake is costly. Measure whether productivity actually moves, not just whether the tool got adopted.

The honest conclusion

EY and PwC have placed different bets, and it will take years to know which strategy ages better. Both firms have credible reasoning. Both face real execution risks.

What we know from the research is that the firms getting the most out of AI tend to be the ones that treat it as a complement to skilled people, invest in structured learning, and redesign workflows around the technology rather than on top of it. That applies whether you have 130,000 auditors or thirty.

For firms building on Alkmist, the pattern we see most often is the augmentation one. Teams keep their people, give them better tools to collaborate with clients and each other, and let the software handle the chasing, tracking, and routine follow-ups that used to eat the junior week.

It's one model among several. But it's the one with the most research behind it, for now.

Sources: EY Global Newsroom (April 2026); ManagementConsulted (August 2025); MIT Sloan School of Management (March 2025, January 2026); MIT Media Lab "State of AI in Business" (2025); PwC Global Workforce Hopes & Fears Survey 2025; ResultSense (April 2026).
