Your AI Strategy Is a Pile of Pilots

A quarterly AI council review at a mid-sized insurer. Twelve teams each get ten minutes. Claims built a document-summarization tool. Underwriting piloted automated risk scoring. Customer service tested a chatbot. Finance ran a proof-of-concept for invoice matching.
Everyone presents metrics from their sandbox. Accuracy. Speed improvement. User satisfaction in the test group. Each team asks for more budget to "move to the next phase."
Nobody asks: which of these should we kill?
How the pile forms
The shape is always the same. Six months building an AI strategy. A cross-functional AI council. A phased roadmap. The board loves it. Eight months later, fourteen pilots, two in production, neither connected to the other. The document-summarization tool runs on one platform. The customer-routing agent runs on another. Both teams are asking for budget to build a third thing, unrelated to either.
MIT's 2025 State of AI in Business report found that 95% of generative AI pilots fail to deliver measurable business impact. BCG's September 2025 study of more than 1,250 companies found 60% are getting no material value from AI. Not failing spectacularly. Just running. Spending. Producing slides. Going nowhere.
Why pilots never die
Launching a pilot is visible work. It produces demos, status updates, executive presentations. Killing a pilot is invisible work. It produces nothing to present. So pilots accumulate. The "AI strategy" becomes a list of things running, with no mechanism for deciding what should stop.
The Cisco AI Readiness Index found that only 32% of organizations have a defined process to measure AI ROI. Without measurement, every pilot looks like a success in the demo and a question mark in finance. The team needs "just one more quarter" to prove value. Leadership doesn't have the numbers to say no.
Bottom-up approaches fare no better. Hackathons, innovation days, "bring us your AI use cases": the adoption numbers look great. But crowdsourced initiatives optimize for what's easy to pilot, not what moves the business. Document summaries. Meeting notes. Email drafts. Individual productivity improves. Business metrics don't move.
McKinsey's 2025 State of AI survey found that only 7% of organizations report fully scaled AI deployment. An IDC study found that for every 33 AI prototypes a company builds, only 4 make it into production. Many of the other 29 are still running. Still consuming budget. Too alive to bury, too dead to be useful.
The zombie tax
Every pilot that lingers costs more than its line item. It occupies an engineer who could be building something that matters. It takes a dashboard slot, crowding out initiatives that deserve attention. It consumes a piece of the AI budget that has to come from somewhere.
The 5% of companies BCG classifies as "future-built" kill faster. More than 60% of them systematically measure and report AI value, compared with 17% of everyone else. They defined what "working" means before they started. They redirect resources from dead pilots into the ones that compound.
The hardest conversation in a portfolio review is always the same: a team lead presenting a pilot that works in a demo and doesn't connect to anything. The technology is fine. The investment is a dead end. Making the kill decision early is what separates a strategy from a collection.
The gap is between companies that decide and companies that accumulate.
The org chart tells the story
When a company creates a separate AI function, with its own leader, its own budget, its own roadmap, it's signaling how it thinks. It thinks AI is a domain, like security or data or infrastructure.
AI is a capability that changes how every domain works. Putting it in a box guarantees it stays in the box.
A useful test: ask the CHRO, the CFO, and a business unit leader to explain the AI strategy without the CTO in the room. If only the technology leaders can describe it, AI is still a tech project.
The organizations that get this right don't have AI strategies; they have software strategies that account for AI. A software strategy asks "how should we build software now that AI exists?" That question produces architecture, not pilots.
What compounds
Fourteen pilots that each solve one problem in one system are fourteen separate bets with fourteen separate maintenance burdens.
A composable platform where document parsing, entity extraction, and decision automation are shared services that any team can use: that compounds. Every new workflow built on top of it is cheaper than the last. Every improvement to a shared service lifts everything above it.
You don't get this from a portfolio of disconnected experiments. You get it from someone asking, before the first pilot ships, "what will the second pilot need that the first one built?"
What a software strategy requires
Which systems are you building or rebuilding in the next two years, and how does AI change the design? What shared capabilities do you need so that AI features don't become isolated experiments?
Data architecture becomes a first-class priority. Deloitte found that only 40% of organizations rate their data management as highly prepared for AI. The companies stuck in pilot purgatory almost always share the same root cause: their data is fragmented across systems that don't talk to each other.
The platform team matters more than the AI team. The engineers who build shared services, APIs, document pipelines, identity systems: those are the people who determine whether AI capabilities compound or stay isolated. When AI has its own budget, it optimizes for its own metrics: pilots launched, models deployed, use cases identified. When it's part of the engineering budget, it optimizes for what matters: does this make the product better?
Governance gets built into the architecture, not bolted on after. Deloitte's 2026 survey found that only 21% of organizations have a mature model for AI agent governance. Meanwhile, a WalkMe survey found 78% of employees use AI tools not provided by their employer, because the approved ones can't keep up. Governance designed after deployment is governance in name only.
Where to start
Pick your three most expensive pilots. For each one, answer two questions. Does it connect to a system you're already building or maintaining? Does it create a capability another team could use?
If the answer to both is no, the pilot is a dead end. Kill it.
The pilots that pass both tests deserve your best engineer, your real budget, and a production timeline with an actual date on it.
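The two-question test amounts to a simple triage rule. A minimal sketch, with hypothetical pilot names and made-up costs standing in for a real portfolio:

```python
from dataclasses import dataclass

@dataclass
class Pilot:
    name: str
    annual_cost: int                 # rough yearly spend
    connects_to_roadmap: bool        # tied to a system you're building or maintaining?
    creates_shared_capability: bool  # produces something another team could use?

def verdict(p: Pilot) -> str:
    # Fails both tests: a dead end. Kill it.
    if not p.connects_to_roadmap and not p.creates_shared_capability:
        return "kill"
    # Passes both: deserves real budget and a production date.
    if p.connects_to_roadmap and p.creates_shared_capability:
        return "invest"
    # One out of two: worth a harder look before the next review.
    return "review"

# Hypothetical portfolio; start with the most expensive pilots.
portfolio = [
    Pilot("doc-summarizer", 400_000, False, False),
    Pilot("claims-extraction", 650_000, True, True),
    Pilot("routing-chatbot", 300_000, True, False),
]

for p in sorted(portfolio, key=lambda x: x.annual_cost, reverse=True):
    print(f"{p.name}: {verdict(p)}")
```

The point of writing it down, even informally, is that the rule leaves no room for "one more quarter": every pilot gets exactly one of three verdicts.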
Then look at your software roadmap. The two or three largest builds or rebuilds planned for the next year: how does the design change if AI capabilities are a first-class consideration from day one?
A year from now, what will matter is not how many pilots you launched but how many share a foundation.

Bill Sourour
Founder, Arcnovus
25 years in enterprise technology. Writes about AI strategy for CTOs.