AI in Software Development: What the Latest Data Actually Shows

18 Feb 26 · Benjamin Igna · 18 min read

Last week, I attended the Pragmatic Engineer Summit in San Francisco. The quality of talks was exceptional, but one in particular stopped me in my tracks: Laura Tacho's presentation on AI in software development. What she shared was so grounded, data-rich, and counter to the usual hype that I had to synthesize my takeaways into this post.

There's a peculiar tension in software development right now. Talk to any engineering leader, and you'll hear two contradictory stories about AI coding tools.

Some will tell you they're seeing twice as many customer-facing bugs and outages. Others report incidents have dropped by 50%. Both are happening simultaneously, across hundreds of organizations. The difference isn't the technology; it's everything else.

Here's what we actually know about AI in software development, based on fresh benchmark data from over 121,000 developers across 450+ companies.

The adoption story: nearly universal, barely transformative

Let's start with adoption. As of early 2026, 92.6% of developers use an AI coding assistant such as Cursor, GitHub Copilot, or Claude at least once a month. About 75% use one weekly.

That's remarkable penetration for technology that barely existed in its current form three years ago.

But here's where it gets interesting: while adoption is near-universal, actual organizational transformation is remarkably rare. A July 2025 MIT study of 152 organizations found that over 90% had adopted AI tools, but very few had fundamentally changed how they work because of them.

Most companies are stuck at what you might call "individual productivity mode": developers using AI to autocomplete functions or generate boilerplate code. That's useful, but it's not transformation. It's like buying everyone in your company a bicycle but never building bike lanes.

The barriers to transformation? They're not technical. They're the usual suspects: poor change management, executives who talk about AI but have never actually used the tools, and fuzzy expectations about what AI can and should do.

The productivity numbers: modest but real

Developers report saving about 4 hours per week through AI tools, roughly a 10% productivity boost. That number has held steady over recent quarters and aligns with findings from Google's internal research.

Not exactly the "10x developer" revolution some vendors promised, but 10% compound gains across an engineering organization add up quickly.
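To make that concrete, here's a quick back-of-the-envelope sketch. The 40-hour week, team size, and loaded cost are illustrative assumptions, not figures from the benchmark data.

```python
# Back-of-the-envelope math behind the ~10% figure and why it compounds across an org.
# The 40-hour week, team size, and hourly cost are assumed values for illustration only.
hours_saved_per_dev_per_week = 4
work_week_hours = 40
team_size = 200                 # hypothetical engineering org
loaded_cost_per_hour = 100      # hypothetical fully loaded cost, in dollars

productivity_gain = hours_saved_per_dev_per_week / work_week_hours
hours_reclaimed_per_week = hours_saved_per_dev_per_week * team_size
weekly_value = hours_reclaimed_per_week * loaded_cost_per_hour

print(f"Per-developer gain: {productivity_gain:.0%}")                    # 10%
print(f"Org-wide hours reclaimed per week: {hours_reclaimed_per_week}")  # 800
print(f"Rough weekly value: ${weekly_value:,}")                          # $80,000
```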

The more striking metric is AI-authored code: code written by AI that makes it through code review and ships to production without major human rewrites. Across 42,600 developers tracked, that number sits at 26.9%, up from 22% last quarter.

For daily AI users, the percentage has crossed 30%. Nearly a third of code reaching customers is being written by AI.

Let that sink in for a moment. In many organizations, AI is now authoring more code than junior developers.
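For readers who want to track something similar in their own organization, here's a minimal sketch of how an "AI-authored code" share could be computed. The data shape and the ai_authored flag are hypothetical; this is not the actual DX methodology.

```python
# Minimal sketch of an "AI-authored code" share: lines written by AI that survive review
# and ship, divided by all shipped lines. The ShippedChange records are hypothetical;
# real tooling would derive them from editor telemetry and git history.
from dataclasses import dataclass

@dataclass
class ShippedChange:
    lines_added: int
    ai_authored: bool  # AI-written and merged without major human rewrites

def ai_authored_share(changes: list[ShippedChange]) -> float:
    total = sum(c.lines_added for c in changes)
    ai_lines = sum(c.lines_added for c in changes if c.ai_authored)
    return ai_lines / total if total else 0.0

changes = [ShippedChange(120, True), ShippedChange(300, False), ShippedChange(80, True)]
print(f"AI-authored share: {ai_authored_share(changes):.1%}")  # 40.0% in this toy example
```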

Where AI actually shines: onboarding

One area where the impact is unambiguous: onboarding new developers.

Research from Microsoft found that how quickly someone becomes productive during onboarding affects their performance for their first two years at the company. If AI can help new developers understand a complex codebase faster, ask better questions, and start contributing sooner, the benefits compound dramatically over time.

This makes intuitive sense. AI coding tools excel at explaining existing code, suggesting patterns consistent with the codebase, and reducing the cognitive load of learning a new system. For experienced developers, that's convenient. For new hires trying to decode millions of lines of unfamiliar code, it's transformative.

The rise of agents: from autocomplete to autonomy

The newest development is agentic workflows: AI that doesn't just respond to prompts but works through multi-step tasks autonomously.

Think of it this way: traditional AI coding tools are like having a very fast typist who anticipates what you want to write. Agentic workflows are like delegating an entire task ("migrate this module to the new framework") and having the AI figure out the steps, make decisions, and come back when it's done.
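For a feel of what that loop looks like in code, here's a minimal, self-contained sketch. Every function in it is a toy stand-in: a real agent replaces plan_next_step and run_tool with model calls and actual tooling, and no specific vendor's API is implied.

```python
# Minimal sketch of an agentic loop: plan a step, execute it with a tool, feed the
# observation back into the history, and repeat until the agent decides it's done.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "edit_file", "run_tests", "finish"
    detail: str

def plan_next_step(history: list[str]) -> Step:
    # Stand-in for an LLM call; a real agent would send `history` to a model.
    if any("tests passed" in h for h in history):
        return Step("finish", "Migration complete, tests green")
    if any("edit_file" in h for h in history):
        return Step("run_tests", "pytest")
    return Step("edit_file", "module.py -> new framework imports")

def run_tool(step: Step) -> str:
    # Stand-in for real tool execution (editing files, running test suites, etc.).
    return {"edit_file": "edited module.py", "run_tests": "tests passed"}[step.action]

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        step = plan_next_step(history)
        if step.action == "finish":
            return step.detail                              # agent reports back when done
        history.append(f"{step.action}: {run_tool(step)}")  # observation loops back in
    return "Stopped: step budget exhausted"

print(run_agent("Migrate this module to the new framework"))
```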

Early data from companies tracking these workflows shows about 80% of developers using them weekly, with over 50% using them daily. OpenAI's Codex has seen over 1 million downloads since its desktop app launched in early February, with a 60% user increase in a single week.

Internally at OpenAI, 95% of developers use Codex, and they submit about 60% more pull requests per week compared to colleagues using other AI tools.

These aren't just vanity metrics. Organizations are using agentic workflows for real work: converting Figma designs into functioning prototypes, handling complex code migrations across thousands of files, and building internal tooling that would previously have required dedicated teams.

Real-world applications: beyond the hype

Aven, a San Francisco-based pain and migraine center, is using agentic workflows to rapidly develop patient-facing software. They take design mockups, convert them into requirements using AI, feed those into agentic loops, and get genuinely usable software: not prototype junk, but production-grade applications.

Cisco has 18,000 engineers working with AI tools daily, using them for complex code migrations and code review. They're building systems where agents categorize code interactions and other agents validate the output.
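The talk didn't detail Cisco's implementation, but the general shape of that categorize-then-validate pattern can be sketched like this. Both "agents" below are rule-based stand-ins for what would be LLM calls in practice, and every name is hypothetical.

```python
# Sketch of a categorize-then-validate agent pattern: one agent labels a code change,
# a second agent checks it before the result is accepted. Rule-based stand-ins only;
# in a real system both functions would be LLM-backed agents.
def categorizer_agent(diff: str) -> str:
    # Hypothetical first agent: classify the kind of change.
    if "def test_" in diff:
        return "test"
    if "import " in diff:
        return "migration"
    return "feature"

def validator_agent(diff: str, category: str) -> bool:
    # Hypothetical second agent: double-check the first agent's output.
    checks = {
        "test": lambda d: "assert" in d,
        "migration": lambda d: "import" in d,
        "feature": lambda d: bool(d.strip()),
    }
    return checks[category](diff)

diff = "import new_framework\n\ndef handler():\n    return new_framework.render()"
category = categorizer_agent(diff)
print(f"category={category}, accepted={validator_agent(diff, category)}")
```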

A large enterprise manufacturing company is using AI-powered tools to build internal developer portals and dramatically accelerate how quickly new engineers become productive.

These aren't science experiments. They're production systems solving real business problems.

Why most organizations still aren't seeing results

Here's the uncomfortable truth: AI cannot fix broken organizations.

If your engineering team struggles with unclear requirements, constant context switching, poor meeting culture, or confusing processes, AI will just make you ship broken things faster. The 4 hours of weekly time savings can't compensate for systemic dysfunction.

As Martin Fowler and Kent Beck concluded after an in-depth analysis: organizations are held back by human and systemic problems, and any technology—including AI—can only help if you apply it to those system-level problems. Which means you first have to acknowledge those problems exist.

You can't colonize Mars if you haven't solved pollution, waste, and traffic on Earth. The problems come with you.

What winning organizations do differently

The organizations actually benefiting from AI share several patterns:

They set specific goals and measure progress. "Spray and pray", giving every developer a license and hoping for the best, consistently fails. Successful organizations point their AI experimentation at concrete problems and track whether they're making progress.

They connect AI adoption to organizational outcomes. It's not enough to track how many people are using the tools. The question is: are we shipping faster? Building higher quality products? Improving developer satisfaction? And, critically, at what cost?

They prioritize developer experience. The smartest organizations use AI to tackle systemic issues: reducing context switching, improving code review quality, smoothing onboarding, eliminating toil. They understand that AI is a tool for addressing organizational problems, not a replacement for good management.

They experiment by solving customer problems. Exploration is exciting, but sustainable AI adoption comes from channeling that experimentation into work that actually serves customers and moves the business forward.

They use readiness frameworks. Tools like the DORA AI model help organizations assess whether they're actually set up to benefit from AI, looking beyond adoption metrics to the practices and culture that correlate with good outcomes.

The takeaway: grounded optimism

We're absolutely in an era of genuine technological possibility. AI coding tools are powerful, improving rapidly, and already changing how software gets built. The sense of wonder is real.

But we also live in reality with real budgets, real teams, real customers, and real organizational challenges that no technology will solve automatically.

The organizations pulling ahead aren't the ones with the fanciest AI tools or the biggest budgets. They're the ones who've done the hard, unglamorous work of understanding their actual problems, building systems to measure what matters, and creating environments where technology can amplify human capabilities rather than paper over organizational dysfunction.

The future isn't about whether AI will transform software development. It already is. The question is whether your organization will be one that benefits or one that carries all its old problems into a new era, wondering why the tools aren't working.

Based on industry research from DX and data collected from 121,000 developers across 450+ companies between November 2025 and February 2026.

Laura Tacho at The Pragmatic Summit. Watch the session

