From Hype to Flow

17 Nov 25
Benjamin Igna
16 min read

A Calm Guide to Adopting AI in Software Organizations

If you look at many engineering organizations today, AI is everywhere and nowhere at the same time. Teams have access to coding assistants, planning copilots, and cloud dashboards that promise smart insights, yet the way work moves from idea to production looks surprisingly unchanged. Releases still slip because tests are not finished, certifications are delayed, and shared services are overloaded, even though product development feels faster than ever.  

What happens in practice is that AI is mostly applied where it is easiest to try: inside individual teams, especially in development. Developers get better tools, planning sessions become a bit smoother, and some documentation becomes easier to produce. But when all of this accelerated work hits a shared testing or certification function that still works the old way, the overall speed of delivery does not change. More work just piles up in front of the bottleneck.  

This is not a failure of the teams, but a structural issue. Responsibility is often sliced vertically: each manager is accountable for her own team or project stream, not for the end-to-end flow across all steps. If you are measured on “your” team’s output, it is rational to make your part faster, even if the system as a whole does not benefit. That is how organizations end up with highly optimized product development and a test or certification lane that is permanently overloaded and under constant pressure.  

When thinking about AI in this context, it helps to use a simple flow metaphor that I first came across in Klaus Leopold's writings. Writing a letter is not about pressing one part of the keyboard extremely fast while leaving the rest slow; it is about hitting the right keys at the right time so that the whole word appears. Speeding up the keys from A to M while the keys from N to Z stay just as slow does not help you finish the letter any sooner. In the same way, focusing AI investments on already fast development teams while leaving testing and certification untouched will not shorten lead times; it will just create more partially finished work waiting in queues.  

Use the AI You Already Have  

AI adoption does not have to start with a new platform, a big RFP, or a greenfield project. Almost every engineering organization already pays for tools that quietly ship with powerful AI capabilities:  

- Code hosts that suggest changes or explain unfamiliar code.  
- CI/CD platforms that analyze logs and highlight anomalies.  
- Work management tools that summarize threads or suggest next steps.  
- Cloud portals that embed copilots into configuration, monitoring, and operations.  

These are not separate AI products; they live where your teams already work, with your existing identity, access control, and compliance guardrails in place. Turning them on and deliberately integrating them into everyday work is far less risky and far more realistic than orchestrating a zoo of independent bots and custom agent networks from scratch.  

Think about your own stack: your developers, testers, SREs, and product people are sitting on top of tools that already include AI-enhanced search, summarization, test generation, incident analysis, report creation, and more. Most organizations use a fraction of what is available simply because nobody has taken ownership of:  

- Auditing what AI features are already present.  
- Deciding where they fit into the flow of work.  
- Teaching people when and how to use them as part of their daily practice.  

A very practical first move for an engineering manager, CTO, or CIO: before you spend a euro on a new AI platform, list the five to ten core tools your teams use and identify which AI features are unused or explored only by a handful of enthusiasts. Your first wave of AI augmentation can come from systematically activating and adopting those capabilities.  

Only Speed Up the Bottleneck  

From a Theory of Constraints point of view, the performance of your delivery system is determined by its slowest relevant step, not by how fast the fast steps are. If testing or certification is your bottleneck, making development twice as fast does not make your product reach users twice as fast; it just ensures that the testing queue is always full and everyone around that bottleneck is under permanent stress.  
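To make the arithmetic concrete, here is a tiny sketch with invented numbers: in a serial flow, the delivery rate is set by the slowest step, so doubling development capacity changes nothing as long as testing remains the constraint.

```python
# Illustration only, with made-up capacities: end-to-end throughput of a
# serial pipeline is capped by its slowest step (the constraint).

stages = {
    "development": 10,    # items per week each step can process
    "testing": 4,
    "certification": 5,
}

throughput_before = min(stages.values())   # 4 items/week, set by testing

# Double development capacity with AI assistance...
stages["development"] = 20
throughput_after = min(stages.values())    # still 4 items/week

print(throughput_before, throughput_after)  # 4 4 -- delivery rate unchanged

# Only lifting the constraint itself moves the number.
stages["testing"] = 8
print(min(stages.values()))  # 5 -- certification becomes the next constraint
```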

In many tech organizations right now, that is exactly the pattern:  

- Development gets AI assistance for coding and planning.
- Backlogs move faster, stories are implemented more quickly.  
- Work then waits days or weeks in front of shared QA, test environments, or certification boards that still operate in an almost pre‑AI mode.  

From the outside, it looks like the organization “invested in AI” but “didn’t get the promised speed‑up”. From the inside, it feels like testers and certifiers are constantly behind, developers are frustrated because their work is stuck, and management asks why all those copilots and smart tools have not changed the release cadence.  

Here, the keyboard metaphor becomes practical. There is no point in pressing A to M faster if N to Z cannot keep up. If testing, quality assurance, or certification is the bottleneck, that is where AI support should go first:  

- AI-assisted regression test generation and maintenance.  
- AI help in mapping requirements or risks to test cases.  
- AI-supported analysis of logs and test results to triage failures.  
- AI copilots to prepare and summarize certification artefacts.  

The job of an engineering manager or CTO in this setting is to own the system, not just their vertical slice. That means explicitly asking:  

- Where does work wait the longest?  
- Where do people feel constantly overloaded?  
- Which step determines how fast value actually reaches customers?  

Once the answer points to testing or certification, the right move is not “more AI for development”, but “AI and process attention directly at that constraint”.  

Data, or It Did Not Happen  

Even the best AI capabilities are useless if they cannot see the work. For AI to truly augment teams, the work they do and the artefacts they produce must be digital, secure, and accessible to the tools.  

If requirements live in hallway conversations and private chats, certifications in PDFs on someone’s laptop, test results in scattered spreadsheets, and decisions in email threads, AI has almost nothing reliable to work with. That is when you see impressive demos in isolated areas but very little sustainable improvement in the real flow of work.  

Treating AI seriously means treating information architecture seriously. That includes:  

- Deciding where work items live and sticking to it.  
- Recording test results and certification outcomes in a structured, queryable way.  
- Making sure permissions and access models allow AI features to see what they need to see without opening the floodgates on sensitive data.  
- Reducing the number of places where critical information can hide.  

A simple heuristic is: if you cannot put it on a board or pull it with a query, you are not ready for meaningful AI augmentation in that area. The first job is to make the work and its context visible and digital; then AI can help summarize, connect, and accelerate.  

When testing and certification are the constraint, this might look like:  

- Standardizing how test cases, runs, and results are stored.  
- Capturing certification evidence and decisions in a consistent system of record.  
- Linking failures and defects back to tests and requirements.  

Once that foundation is in place, AI can genuinely support people: generating regression tests from past incidents, surfacing similar change requests and their outcomes, or preparing summaries that help certifiers make informed decisions faster.  
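For illustration, here is a minimal sketch of such a system of record using nothing more than Python's standard-library `sqlite3` module. The table and column names are assumptions, not a prescribed schema; the point is only that test results, certification evidence, and their links to requirements become something you can pull with a query.

```python
# Minimal sketch of a queryable store linking requirements, test cases,
# runs, and evidence. Schema is illustrative, not prescriptive.
import sqlite3

conn = sqlite3.connect("delivery_evidence.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS requirements (
    id TEXT PRIMARY KEY,
    title TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS test_cases (
    id TEXT PRIMARY KEY,
    requirement_id TEXT REFERENCES requirements(id),
    title TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS test_runs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    test_case_id TEXT REFERENCES test_cases(id),
    executed_at TEXT NOT NULL,      -- ISO 8601 timestamp
    result TEXT NOT NULL,           -- 'pass' or 'fail'
    evidence_url TEXT               -- link to logs, report, or certification artefact
);
""")

# Once results land here instead of in scattered spreadsheets, "pull it with
# a query" becomes literal: every failing run and the requirement it covers.
failing = conn.execute("""
SELECT r.id AS requirement, t.title AS test_case, run.executed_at, run.evidence_url
FROM test_runs AS run
JOIN test_cases AS t ON t.id = run.test_case_id
JOIN requirements AS r ON r.id = t.requirement_id
WHERE run.result = 'fail'
ORDER BY run.executed_at DESC;
""").fetchall()
```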

A Simple Pattern: Digitalize → See Flow → Augment People  

Putting these rules together leads to a simple, repeatable pattern you can use in one value stream before rolling anything out more widely:  

1. **Map the flow of work** for one product or service, from idea to release, including all test and certification steps.  
2. **Make the work visible and digital** at each step, especially around testing and certification: tickets, artefacts, decisions, evidence.  
3. **Identify the bottleneck** by looking at where work waits the longest and where people are consistently overloaded.  
4. **Turn on and configure the AI features** closest to those steps in the tools you already use. Focus on augmentation for the people at the bottleneck, not on building a new AI product.  
5. **Support the people in that role** to integrate these AI features into their everyday work—pairing, short training sessions, and explicit expectations help a lot more than sending a link to a new feature.  
6. **Measure a handful of metrics** such as lead time through test/certification, the amount of work waiting in front of the bottleneck, and key quality indicators before and after.  
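For step 6, the numbers do not need a BI platform to get started. Here is a minimal sketch, assuming your system of record can export when an item became ready for test and when testing finished; the field names and dates are invented for illustration.

```python
# Two of the simplest useful numbers: lead time through test/certification
# and the amount of work waiting in front of the bottleneck.
from datetime import datetime
from statistics import median

work_items = [
    # when the item became ready for test vs. when test/certification finished
    {"ready_for_test": "2025-10-01", "test_done": "2025-10-15"},
    {"ready_for_test": "2025-10-03", "test_done": "2025-10-20"},
    {"ready_for_test": "2025-10-28", "test_done": None},  # still waiting
]

def days_between(start: str, end: str) -> int:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

finished = [item for item in work_items if item["test_done"]]
lead_times = [days_between(i["ready_for_test"], i["test_done"]) for i in finished]

print("median lead time through test (days):", median(lead_times))
print("items waiting in front of the bottleneck:", len(work_items) - len(finished))
```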

This is not glamorous, but it is predictable. You learn how AI behaves in your context, with your data and your constraints, using tools you already know how to operate. And you build credibility with your teams because they see that AI is there to help them with their real pain points, not to create additional dashboards and side projects.  

What Not To Do: The DIY AI Trap  

There is a strong temptation, especially in technically capable organizations, to jump straight into building custom AI platforms, elaborate internal tools, or networks of agents and automations. The promise is attractive: a solution that fits your context perfectly and can be proudly presented as “our AI platform”.  

The reality, in most cases, is a tooling and administration hell:  

- Multiple parallel AI tools, each with its own configuration, security story, and maintenance overhead.  
- Internal bots or agents that are exciting for a small group of creators but never quite find a stable place in the everyday flow of work.  
- More attention on orchestrating models, prompts, and infrastructures than on improving the actual value stream.  

If your testing and certification still run on email, spreadsheets, and hallway conversations, an internal network of AI agents will not save you; it will only add a new layer of complexity on top of an already fragile system. For most organizations, most of the value will come from using the AI built into their existing platforms in a focused way, not from building yet another AI platform from scratch.  

There are contexts where custom tooling makes sense—typically when you have already:  

- Stabilized your basic flow.  
- Understood your main bottlenecks.  
- Exhausted the easy wins from existing tools.  

But for the majority of engineering organizations right now, that is not the starting point.  

This view is intentionally conservative. It does not promise a ten‑fold productivity boost or a fully autonomous software factory. Instead, it suggests three grounded moves for engineering managers, CTOs, and CIOs:  

- Use the AI capabilities you already pay for.  
- Aim them at the real bottlenecks in your value stream, often testing and certification.  
- Make sure the work and data in that area are digital, secure, and accessible so AI can truly augment your people.  

Your experience might look different, and that is where this article should continue—in the comments.  

- Where has AI actually helped your teams deliver better or faster, and where did it just create more work?  
- Have you seen AI meaningfully transform testing or certification, or is your constraint somewhere else?  
- Did you manage to make custom agents or internal platforms work without ending up in tooling and administration hell?  
