
AI Didn't Make My Team 10x. It Kept Them Standing.

Tags: AI Productivity Metrics, AI, Engineering Leadership

14 months of git data from a real team using Claude Code.

April 1, 2026
Jason Meltzer
I started in my current VP of Engineering role in January 2026. Within a few weeks, I knew I had a problem. One developer was doing more work than anyone else combined. Within a couple of months, he'd be gone.
I needed to understand what I was working with. How productive was this team, really? Could they absorb the loss? Claude Code had been running for four months, and everyone had opinions about whether it was working. I pulled 14 months of git history and counted what actually happened.

What I walked into

Eighteen different people had committed code over the previous 14 months. Most of them were gone by the time I showed up. Tenure ranged from just shy of a year to brand new. The newest developer started the same day I did.
You're not always handing AI tools to seasoned engineers who know the system. You hand them to people still figuring out where things live. You hand them to people whose job it is to understand code written by someone who quit six months ago.
That was my team. High turnover, short tenure, a codebase written by people who weren't here anymore. And one developer holding a disproportionate amount of it together.

The aggregate numbers didn't help

I pulled every merged PR and every confirmed bug from our issue tracker across two periods: eight months before Claude Code adoption, five months after.
Per-developer PR output was around 12 per month before Claude Code. After adoption, it stayed roughly the same. Confirmed bugs went from 17 per month to 12, and the bug rate improved from 23% to 19%. Median cycle time from first commit to merged PR stayed at about six hours all year. When headcount fell, total output fell too. On the aggregates alone, you'd say the tool didn't do much.

One developer changed my read

One developer's numbers showed what was actually happening.
Before adoption, he averaged 57 lines per commit. After: 227. By January, he was at 616. His PR volume barely moved, from about 17 per month to 20.
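A metric like lines per commit falls straight out of `git log --numstat`; no special tooling required. Here's a minimal sketch of the idea — the repo path, author matching, and date windows are illustrative placeholders, not the script I actually ran:

```python
import subprocess

def parse_numstat(log_text):
    """Parse `git log --pretty=format:@%H --numstat` output into
    (commit_count, total_lines_added)."""
    commits, added = 0, 0
    for line in log_text.splitlines():
        if line.startswith("@"):
            commits += 1
        else:
            parts = line.split("\t")
            # numstat rows look like "<added>\t<deleted>\t<path>";
            # binary files report "-" for the counts, so skip those.
            if len(parts) == 3 and parts[0].isdigit():
                added += int(parts[0])
    return commits, added

def lines_per_commit(repo, author, since, until):
    """Average lines added per commit for one author over a window."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"--author={author}",
         f"--since={since}", f"--until={until}",
         "--pretty=format:@%H", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout
    commits, added = parse_numstat(out)
    return added / commits if commits else 0.0
```

Run it per developer per month and the trend is hard to miss.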
He wasn't shipping more features. He was shipping different work. Bigger changes. Things that had been sitting in the backlog because they were too risky for anyone else to touch. The tool changed what he was willing to take on.
In November 2025, he authored 50% of all merged PRs. One person producing half of the team's output. That's a single point of failure, and I was looking at it knowing he'd be gone soon.

What happened after

By the time he left, no single developer accounted for more than 26% of merged PRs, split across eight people. The rest of the team picked up code they didn't write and kept shipping.
The onboarding data told me why. I compared first-month output for developers who onboarded before AI adoption versus after. Pre-AI, new developers averaged 16 to 62 lines per commit in their first month. Post-AI: 111 to 134.
A new developer with Claude Code and your codebase in context can produce working code in an unfamiliar system at a different scale than one without it. Their first month looks like full participation, not tentative exploration.
The team could absorb the loss because AI tools shortened the ramp-up. People who'd been here three months could pick up unfamiliar code and contribute meaningfully, not because they understood the whole system, but because the tooling bridged the gap fast enough to keep things moving.
The team survived because AI let people work in code they didn't fully understand. That's also the part that keeps me up at night.

The knowledge gap

In August 2025, that top developer made 63 commits and added 2,524 lines. He was working in small increments, touching 136 files. He understood every line he wrote.
In January 2026, he made 15 commits and added 9,237 lines. Each commit averaged 616 lines. Did he understand those 9,237 lines the way he understood those 2,524? I don't think so.
When bugs came in, tracing through code he'd written with AI was slower than working through code he'd written by hand. He'd produced it, but that didn't mean he could navigate it under pressure.
AI tools create a gap between what your team produces and what your team understands. Your git log says you shipped over 100,000 lines of code this year. What your team actually comprehends is some fraction of that. Nobody's measuring it.
A new developer hitting 134 lines per commit in month one sounds great. But faster output doesn't mean faster understanding. I'm still figuring out how to close that gap.

What I took from this

PR counts and total commit volume don't tell you much by themselves. You need to normalize for headcount and track output per developer. When I searched GitHub for "bug" and "fix," I only caught 10% of actual defects. Build your metrics before you hand anyone a new tool.
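Normalizing for headcount is a small computation once you have merged-PR records. A sketch with hypothetical data — the `merged_prs` shape and the "active developer" definition here are assumptions for illustration, not our exact methodology:

```python
from collections import Counter

def prs_per_developer_per_month(merged_prs):
    """merged_prs: list of (author, "YYYY-MM") tuples, one per merged PR.
    Returns {month: prs / active_developers}, where an active developer
    is anyone who merged at least one PR that month."""
    prs_by_month = Counter(month for _, month in merged_prs)
    devs_by_month = {}
    for author, month in merged_prs:
        devs_by_month.setdefault(month, set()).add(author)
    return {m: prs_by_month[m] / len(devs_by_month[m]) for m in prs_by_month}
```

The same grouping gives you concentration risk for free: the top author's share of any month's PRs is one more Counter away.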
Watch for concentration risk. If your best AI user produces half the team's output, that's not productivity, that's dependency. One person leaving shouldn't almost kill you.
My team's problem was specific: developers working in codebases written by people who'd left, and knowledge disappearing with every departure. The survival question was whether everyone else could absorb that loss and keep moving. They did.
We didn't get 10x. We got a team that could lose the people who built the system and still ship.

Jason Meltzer is VP of Engineering at LeadSimple. He has spent over 15 years leading engineering teams from startups to enterprise, including GoDaddy, American Express, and Choice Hotels. He writes about engineering leadership at itsnotthecode.substack.com.