- A 2025 METR study found mid-career engineers are more likely to be slowed down by AI than sped up in their area of expertise, while junior and senior engineers benefit more, creating a structural disadvantage unique to this career stage.
- Every technical moat engineers have bet on since 2021 has been crossed within one to two years. The large codebase moat is the current declared boundary, and betting on it holding is a bet against a consistent five-year trend.
- The answer is not to abandon expertise but to redirect it one level up: from writing code to directing, evaluating, and governing AI agents, with mental simulation as the core human differentiator AI cannot replicate.
If you have tried AI coding tools and found them more trouble than they are worth, you are not imagining things. And you are not alone.
A 2025 study by METR found that developers with deep expertise in a codebase were more likely to be slowed down by AI than sped up, because they spent more time correcting subtle errors than the tools saved them. Meanwhile, junior engineers and senior leaders are getting faster. Mid-career engineers are caught in the middle, and most do not realize the gap is structural, not personal.
This is not a temporary adjustment period. It is a closing trap, and understanding why it exists is the first step to getting out of it.
The Study That Explains Why AI Is Slowing You Down
In early 2025, METR, a model evaluation and threat research organization, published a study on how AI affects developer productivity across different experience levels. The finding that matters most for mid-career engineers is this: on tasks where developers already had deep expertise, they were more likely to be slowed down by AI than sped up, because they had to babysit and correct the model when it made subtle errors in their domain.
The opposite was true for developers working in unfamiliar codebases. For them, AI provided a substantial speed boost.
This creates a counterintuitive dynamic. New engineers, who are working in unfamiliar territory almost by definition, benefit from AI assistance. Senior engineers and engineering leaders, whose roles are broad and forward-thinking by design, have the freedom to experiment with new tools and benefit from AI's ability to accelerate work in adjacent areas. Mid-career engineers, who have invested years building deep expertise in specific systems, are the ones most likely to find AI frustrating, unreliable, and slower than doing it themselves.
Stanford University research found that employment among software developers aged 22 to 25 fell nearly 20% between 2022 and 2025, coinciding with the rise of AI-powered coding tools. The pressure on engineers at every level is real.
The Freedom Gap Makes It Worse
The research finding is only part of the problem. The other part is structural.
Junior engineers are expected to ramp up. Companies build in time for them to explore technologies and develop skills across domains. Senior and principal engineers are given latitude because their roles are explicitly forward-thinking and broad. Experimentation is part of their job description.
Mid-career engineers are measured primarily on delivery. Their value is defined by what they can ship based on existing expertise. Any time spent experimenting with new tools directly reduces the output they are evaluated on. They are, in effect, trapped: they cannot afford to experiment, they gain less from AI in their current domain than anyone else, and every week they hear junior engineers and senior leaders talking about the remarkable things they are accomplishing with the same tools.
The sunk cost fallacy compounds this further. A mid-career engineer has spent years building expertise. Walking away from that investment to pivot into something new feels like writing off everything they have built. Even when the rational case for pivoting is clear, the psychological case for doubling down feels compelling.
Why the Moat Strategy Does Not Work
The natural response to this situation is to double down on specialization: build a deeper moat around expertise that AI cannot cross. Garrett walks through why this has been a losing strategy at every stage of AI's development.
Eighteen months ago, AI usage was mostly for code generation and tab completion. This year, 55% of developers regularly use AI agents, a massive jump. The capability trajectory has been consistent.
In 2021, AI could only write toy snippets. GitHub Copilot could correctly fill in Python function bodies 57% of the time after ten tries. The moat seemed to be producing actually working code. That moat was crossed in 2023 when GPT-4 could write a correct function on the first try 85% of the time.
The next moat was editing across files. Cursor and GitHub Copilot crossed that in 2024. The moat after that was meaningful independent contribution: could an agent be assigned a bug and come back with a working solution? In 2025, Copilot gained exactly that capability. Andrej Karpathy coined the term vibe coding to describe the creation of working software through natural language requests alone.
As of early 2026, the declared moat is large codebases: systems with 100,000 or more lines that require sustained, coherent reasoning across the entire codebase. Claude Code achieved a 77.2% score on SWE-bench Verified, solving real-world GitHub issues end-to-end, and maintains coherence through complex, multi-step coding workflows lasting 30-plus hours.
“Every year, I keep thinking that LLMs may have hit a wall. Given that they are just next-token predictors, LLMs really have no business being as good as they are. Then they burst right on through it every single year.”
Around 41% of all code written in 2025 is AI-generated, with current trajectories suggesting crossing 50% by late 2026 in organizations with high AI adoption. Betting on the large codebase moat holding is a bet against a consistent five-year trend of those bets being wrong.
The Answer: Move Up the Abstraction Ladder
The history of software engineering is a history of rising abstraction. Punch cards gave way to assembly. Assembly gave way to high-level languages. Each transition required engineers to stop doing things the old way and start operating at a higher level. The engineers who resisted each transition did not benefit from it. The engineers who moved up with it did.
The current transition is the same: code is the new low-level operation, and working with AI agents is the new high-level language. Development is shifting toward focused working sessions where instead of writing code, developers write specifications that agents implement.
“Just like the industry had to move from punch cards to assembly to high-level code, now we have to move on to interacting with AI agents. Code is now low-level. Working with agents is the new high-level language.”
This is not about abandoning technical expertise. It is about applying that expertise at a different layer of the stack. The engineers who understand how systems actually work are the ones who can direct agents effectively, catch their errors before they compound, and architect workflows that are reliable at scale. Deep expertise becomes an asset again at this level, but only for engineers who have made the transition.
Mental Simulation: The Human Differentiator That AI Cannot Replace
Moving up the abstraction ladder requires understanding what humans can do that AI agents genuinely cannot. Garrett identifies this as mental simulation: the ability to build accurate models of how complex systems will behave, reason about architecture, and predict how a system will react to novel inputs.
LLMs operate on patterns. They are extraordinarily good at matching and generating within those patterns, but they cannot analyze complex causality or reason about emergent system behavior the way a human engineer with a mental model of the system can. Context-driven engineering forces engineers to become good architects and good product designers again. If you cannot explain what you want to do and how you plan to do it, AI will not save you. It will just produce technical debt at industrial speed.
This is what makes mental simulation the core AI-era skill for software engineers over the next decade. Applying it in the context of agentic systems specifically means:
- Understanding the scale of data and what an LLM can manage within its context window.
- Thinking about how data should be pre-digested so agents are not overwhelmed.
- Knowing how information needs to be structured and passed between components so the system holds together.
- Understanding where an agent's confidence is likely to diverge from its accuracy, which determines where human oversight is non-negotiable.
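The first two points above can be made concrete with a small sketch. Everything here is illustrative: the budget, the four-characters-per-token heuristic, and the function names are assumptions, not any particular vendor's API. The idea is simply that data gets pre-digested to a known token budget before an agent ever sees it.

```python
# Illustrative sketch of "pre-digesting" data to fit an agent's context window.
# The budget, the chars-per-token heuristic, and all names are assumptions.

CONTEXT_BUDGET_TOKENS = 8_000   # assumed share of the window reserved for input data
CHARS_PER_TOKEN = 4             # rough heuristic for English text

def estimate_tokens(text: str) -> int:
    """Cheap token estimate; a real system would use the model's tokenizer."""
    return len(text) // CHARS_PER_TOKEN + 1

def predigest(documents: list[str], budget: int = CONTEXT_BUDGET_TOKENS) -> list[str]:
    """Greedily pack whole documents until the budget is spent, then truncate
    the first document that would overflow rather than silently dropping data."""
    packed: list[str] = []
    used = 0
    for doc in documents:
        cost = estimate_tokens(doc)
        if used + cost <= budget:
            packed.append(doc)
            used += cost
        else:
            remaining_chars = max(0, (budget - used) * CHARS_PER_TOKEN)
            if remaining_chars > 0:
                packed.append(doc[:remaining_chars] + " [truncated]")
            break
    return packed
```

In practice the truncation step would be a summarization pass rather than a hard cut, but the engineering judgment is the same: decide what the agent sees instead of letting it drown in raw input.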
Successful engineers in 2026 are those who can master AI orchestration, coordinate multiple AI agents for complex workflows, and serve as a bridge between business requirements and technical specifications. These are not new skills in isolation. They are existing engineering skills applied to a new context, which is exactly what makes the transition achievable for mid-career engineers who are willing to make it.
What This Means Practically
The pivot Garrett is describing does not require abandoning years of expertise. It requires redirecting that expertise toward a higher-level problem: building, directing, and governing AI systems rather than implementing the systems directly.
Engineers describe developing intuitions for AI delegation over time. They tend to delegate tasks that are easily verifiable or low-stakes, while keeping conceptually difficult or design-dependent tasks for themselves or working through them collaboratively with AI. That judgment, knowing what to delegate and what to keep, is itself a skill that compounds with experience. Mid-career engineers who make the transition bring that judgment with them. Junior engineers have to develop it from scratch.
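The delegation heuristic described above can be sketched as a simple triage rule. This is a hypothetical illustration of the judgment, not a prescription; the `Task` fields and thresholds are assumptions chosen to mirror the pattern engineers report.

```python
# Hypothetical sketch of the delegation judgment: delegate what is easy to
# verify and low-stakes, keep design-heavy work, pair on the rest.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    easily_verifiable: bool   # e.g. covered by tests or a crisp spec
    high_stakes: bool         # e.g. touches auth, billing, or data migration
    design_heavy: bool        # requires architectural or product judgment

def triage(task: Task) -> str:
    if task.design_heavy:
        return "keep"        # the human owns the thinking
    if task.easily_verifiable and not task.high_stakes:
        return "delegate"    # the agent implements, the human reviews
    return "pair"            # work through it collaboratively with AI
```

The point is not the rule itself but that mid-career engineers already have the instincts the rule encodes: they know which tasks in their systems are verifiable, which are high-stakes, and which demand real design work.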
The window to make this transition while still being ahead of the curve is open, but it is not unlimited. The engineers building fluency with agentic systems now will be the ones defining what senior engineering work looks like in two or three years.
Building the Skill Set Systematically
Most engineers trying to learn agentic AI face the same problem: piecing together fragments from tutorials, documentation, and YouTube videos does not produce the systems thinking required to operate effectively at this level. It produces familiarity with individual tools without the architectural judgment to use them well.
Interview Kickstart's Agentic AI Career Boost Program is the structured path Garrett joined as an instructor. The course focuses specifically on systems thinking, hands-on project work, evaluation frameworks, and guardrails, taught by FAANG instructors and practitioners who have shipped real production systems. It is designed to build the kind of mental models that make agentic AI a career accelerator rather than another tool to keep up with.
The free webinar is the right first step. It covers the full curriculum, what the program builds toward, and gives you direct access to the team before you commit.
The transition from writing code to directing agents is not comfortable. But the engineers who make it now will not be the ones asking in two years whether they should have started sooner.
FAQs
1. Why do mid-career engineers struggle more with AI tools than junior or senior engineers?
Because their value is tied to deep expertise in specific systems, which is exactly where AI makes the most subtle errors and requires the most supervision. Junior engineers benefit from AI in unfamiliar territory. Senior engineers have the freedom to experiment. Mid-career engineers are caught in the middle, measured on delivery and unable to afford the time to adapt.
2. Is specialization still worth investing in for software engineers in 2026?
Deep expertise is still valuable, but only when redirected toward a higher level of abstraction. The engineers who will benefit most are those who use their systems knowledge to direct and govern agents effectively, not those who use it to compete with agents at the implementation level.
3. What is mental simulation and why does it matter for agentic AI?
Mental simulation is the ability to build accurate internal models of how complex systems behave and reason about architecture and causality. It is what allows engineers to predict how an agent will fail with novel inputs, design reliable multi-agent workflows, and catch errors before they compound across a system. It is the core human differentiator that LLMs cannot replicate.
4. How long does it take to develop the AI skills engineers need to work with agents?
With a structured learning path focused on systems thinking and hands-on project work, meaningful fluency is achievable within a few months of consistent effort. The key is building architectural judgment alongside tool familiarity, which is what separates practitioners who can build reliable agentic systems from those who can only demo them.