
The AI Succeeded. That Was the Problem.

March 16, 2026
Aarohi Sen

In the space of three months, three different teams inside the same company each launched an AI agent. Each got its success story in the internal newsletter. Each was framed as proof that the organisation was delivering on its promise of “AI-first”.

The first automated a chunk of the research workflow. The second handled document classification for a specific client vertical. The third sat on top of a proprietary dataset and answered natural-language queries about historical project performance.

Each one was genuinely good at what it did. Adoption metrics climbed week on week. From a distance, it looked like exactly what success was supposed to look like.

Then a fourth team came to procurement. They needed a different agent, one trained on a dataset none of the existing three could cover. Procurement asked the reasonable question: why not use one of the agents we already have?

The fourth team pushed back. Their use case was different. The data requirements were specific. The existing agents didn’t have the context. Weeks of back-and-forth with budget holders followed. Eventually the team agreed to pilot one of the existing agents, with the enthusiasm of someone told to wear somebody else’s shoes.

That pilot was never going to work. Everyone involved knew it. And everyone involved could later point to the adoption dashboard as proof they had tried.

Meanwhile, the three original teams kept building their case. They ran training sessions for adjacent teams and reported attendance as adoption. They canvassed other departments to join their pilot at no cost. Coalition-building dressed as generosity. In at least one team, members were encouraged to post their training completion certificates on LinkedIn. The certificates proved that people had been trained. They proved nothing about whether anyone had changed how they worked.


Think about the wisest person you have worked with. Not the most technically skilled. The wisest. The person who could walk into any room, whether a board meeting, a client deep-dive, or a product review, and within minutes say something that shifted the conversation. That kind of intelligence is not specialist. It draws on everything the person has seen, across every context, and synthesises it in real time.

Now look at the agents. Each one was excellent within its boundary. Fast, accurate, genuinely useful. But none of them had any awareness of what the others knew. The research agent had no access to the project performance data. The document classifier knew nothing about the client vertical the research agent was trained on. Three brilliant specialists who could not hold a conversation with each other.

The obvious answer is an orchestration layer. A master agent that coordinates the specialists, routes queries across them, synthesises their outputs. Every AI architect knows this. The question is who builds it. No individual team had the incentive. The orchestrator’s value cannot be attributed to any single team’s performance target. It is, by definition, a cross-functional capability. And cross-functional capabilities do not survive an incentive structure that rewards teams for the adoption of their own agent, not for the intelligence of the whole.
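
To make the idea concrete, here is a minimal sketch of what such a layer might look like, in Python. The specialist names, the keyword routing, and the join-based synthesis are all illustrative assumptions, not the company’s actual design; a production orchestrator would route and synthesise with models, not string matching.

    from dataclasses import dataclass
    from typing import Callable

    # Hypothetical specialists; in this story they would be the research
    # agent, the document classifier, and the project-history agent.
    @dataclass
    class Specialist:
        name: str
        covers: set[str]              # topics the agent was trained on
        answer: Callable[[str], str]  # the agent's own query interface

    def orchestrate(query: str, specialists: list[Specialist]) -> str:
        """Route a query to every specialist whose coverage it touches,
        then synthesise their partial answers into one response."""
        words = set(query.lower().split())
        partials = [s.answer(query) for s in specialists if s.covers & words]
        if not partials:
            return "No specialist covers this query."
        # Real synthesis would be another model call; joining is a stand-in.
        return "\n".join(partials)

    # Illustrative wiring: each lambda stands in for a real agent call.
    agents = [
        Specialist("research", {"market", "research"}, lambda q: "research view"),
        Specialist("projects", {"project", "performance"}, lambda q: "history view"),
    ]
    print(orchestrate("how did project performance track the market", agents))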


The declared priority was AI-first, an integrated capability that would change how the company operated and served its clients. What got built was three isolated tools, each shaped by a different performance target, each unable to see past the boundary of its team’s incentive structure.

This is Priority Signal Divergence. Not a contradiction anyone planned. A gap between what the organisation declared and what the incentive architecture produced. Made legible, in this case, in the AI itself.

You do not need to survey your leadership team to find this gap. You can read it in the agents they build.

There is a cost that does not show up on any dashboard. It is not the duplicated vendor spend or the overlapping agent licences. Those surface eventually.

The cost is what was never built.

That is what Priority Signal Divergence costs. Not the money you spent twice. The value you were never able to create because the priority architecture made it structurally impossible.


But the divergence is not abstract. It has a structure, and that structure is measurable. Three sub-metrics define it.

The first is Calendar Attention Ratio. Take an organisation’s stated strategic priorities, the ones the CEO presents to the board. Rank them. Then categorise the leadership team’s calendars over a four-month period by which priority each meeting serves. Rank the priorities again, this time by the percentage of meeting time each one actually received. Compare the two rankings. In this company, AI-first was priority number one on the strategy slide. In the leadership team’s calendars, it sat at number three. The gap between those two rankings is not a scheduling problem. It is a behavioural record of what the leadership team considered important enough to sit in a room about.
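
For readers who want to try this, the comparison reduces to a few lines of analysis. The sketch below, in Python with invented numbers, ranks priorities by stated order and by observed meeting hours, then scores the gap with Spearman’s rank correlation; the priority names and the hours are illustrative assumptions, not data from the company in this story.

    # Calendar Attention Ratio: compare the stated priority ranking with
    # the ranking implied by actual meeting time. Data is illustrative.
    stated_rank = {"ai_first": 1, "client_growth": 2, "cost_discipline": 3}

    # Meeting hours per priority, categorised from four months of calendars.
    meeting_hours = {"ai_first": 120, "client_growth": 310, "cost_discipline": 240}

    # Rank priorities by time received: most hours = rank 1.
    by_time = sorted(meeting_hours, key=meeting_hours.get, reverse=True)
    behavioural_rank = {p: i + 1 for i, p in enumerate(by_time)}

    # Spearman's rho: 1.0 means words and calendars agree exactly.
    n = len(stated_rank)
    d_squared = sum((stated_rank[p] - behavioural_rank[p]) ** 2 for p in stated_rank)
    spearman_rho = 1 - (6 * d_squared) / (n * (n**2 - 1))

    for p in stated_rank:
        print(f"{p}: stated #{stated_rank[p]}, behavioural #{behavioural_rank[p]}")
    print(f"rank agreement (Spearman rho): {spearman_rho:.2f}")

On these invented numbers, AI-first sits at stated rank one and behavioural rank three, and the rankings score -0.5: the pattern the article describes.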

The second is Budget Velocity Ratio. For each stated priority, measure the median number of days between a discretionary spend request and its approval. The faster the money moves, the higher the priority sits in practice. Each of the three original agent teams got budget approval in days. The cross-functional orchestration layer, the capability the strategy actually required, could not even find a budget owner. Budget Velocity Ratio does not measure how much money is allocated. Allocation is what the board approves. Velocity is what the organisation treats as urgent.
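
Measured in code, this is a grouped median. The sketch below assumes a flat export of discretionary spend requests with requested and approved dates; the priority tags, dates, and record layout are invented for illustration.

    from datetime import date
    from statistics import median
    from collections import defaultdict

    # Illustrative spend-request records: (priority tag, requested, approved).
    requests = [
        ("team_agent_a", date(2025, 9, 1), date(2025, 9, 3)),
        ("team_agent_b", date(2025, 9, 5), date(2025, 9, 8)),
        ("orchestration", date(2025, 9, 2), date(2025, 11, 14)),
        ("orchestration", date(2025, 10, 1), date(2025, 12, 2)),
    ]

    # Budget Velocity Ratio: median days from request to approval, per priority.
    days_by_priority = defaultdict(list)
    for priority, requested, approved in requests:
        days_by_priority[priority].append((approved - requested).days)

    for priority, days in sorted(days_by_priority.items()):
        print(f"{priority}: median approval in {median(days):.0f} days")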

The third is Response Latency Ratio. For each strategic priority, calculate the median response time to internal communications tagged to that priority. An email about one team’s agent got a reply within hours. An email about the coordination problem between all three sat for days. Nobody was ignoring it deliberately. They were responding fastest to whatever their own performance target made most urgent. Aggregate that pattern across the leadership team and you have a behavioural map of which priorities are real and which are decorative.
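
The same shape of calculation applies to communications. The sketch below takes a hypothetical log of messages tagged by priority, with sent and first-reply timestamps, and computes the median reply latency per tag; every record in it is invented.

    from datetime import datetime
    from statistics import median
    from collections import defaultdict

    # Illustrative message log: (priority tag, sent, first reply).
    messages = [
        ("own_agent",    datetime(2025, 9, 1, 9, 0),  datetime(2025, 9, 1, 11, 30)),
        ("own_agent",    datetime(2025, 9, 2, 14, 0), datetime(2025, 9, 2, 16, 0)),
        ("coordination", datetime(2025, 9, 1, 9, 0),  datetime(2025, 9, 4, 17, 0)),
        ("coordination", datetime(2025, 9, 3, 10, 0), datetime(2025, 9, 8, 9, 0)),
    ]

    # Response Latency Ratio: median hours to first reply, per priority tag.
    hours_by_tag = defaultdict(list)
    for tag, sent, replied in messages:
        hours_by_tag[tag].append((replied - sent).total_seconds() / 3600)

    latencies = {tag: median(hours) for tag, hours in hours_by_tag.items()}
    for tag, hrs in sorted(latencies.items(), key=lambda kv: kv[1]):
        print(f"{tag}: median first reply in {hrs:.1f} hours")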

The raw data to measure all three already exists inside most large organisations. Microsoft Viva Insights tracks meeting hours and email response patterns across calendars and inboxes. Process mining platforms like Celonis sit on top of SAP and Oracle, measuring procurement approval cycle times down to the hour. Slack launched Agent-Ready APIs in late 2025, opening conversational metadata to AI agents for the first time. SAP Signavio now offers AI Agent Mining, designed to track whether an organisation’s own AI agents are behaving as intended. Each of these platforms measures a piece of the picture. None of them joins the dots to strategy. They can tell you how long an approval took or how many hours were spent in meetings. They cannot tell you whether any of it was pointed at the priorities the organisation said mattered most.

The principle behind all three is not new. Bandiera, Hansen, Prat and Sadun, in a 2020 study published in the Journal of Political Economy, tracked how 1,114 CEOs across six countries spent their time. In high-performing firms, the correlation between stated priorities and actual time allocation was strong. In firms with documented performance problems, it collapsed. The declared number one priority was sitting at three or four in actual behaviour. The executives in those firms were not lying about their priorities. They were simply organised, behaviourally, around a different set than the ones they articulated.

Three teams built three agents. Each one was fast, accurate, and genuinely useful. And each one, without anyone intending it, became a faithful record of the gap between the company’s stated priorities and the ones it actually funded, staffed, and rewarded.

The AI did not fail. It succeeded so completely that it made the divergence visible to anyone who knew where to look. Meanwhile, the adoption dashboard said everything was fine.

Itúy is a private war-room for leaders navigating high-stakes moves and board-level strategy.