When AI Raises the Average, Who Protects the Breakthrough?

Artificial intelligence is superb at optimisation. But transformation still comes from the deviations leaders choose to back.

By Debleena Majumdar and Arjo Basu

In the summer of 1854, cholera tore through London. Entire neighbourhoods were struck within weeks. As mortality rose sharply, panic followed. The dominant explanation for the outbreak, supported by the data and medical consensus of the time, was miasma: disease spreading through bad air.

Yet people kept dying.

John Snow, a London physician treating patients during the outbreak, had a different hypothesis. Cholera, he argued, was not spreading through air but through contaminated water. His evidence was modest by any modern standard: a simple hand-drawn map showing cholera deaths clustered around a single water pump on Broad Street. When local authorities removed the pump handle, infections fell.

Public health would eventually be reorganised around water sanitation, but that validation came much later. Epidemiology did not yet exist as a formal discipline. At the moment of decision, Snow’s insight was radically different from prevailing belief—and yet compelling enough to act upon.

AI Is Great at Improvement. But That’s Not the Same as Change

Fast forward to today, and to the rapid adoption of artificial intelligence in organisations. Over the past few years, especially with generative AI, adoption has accelerated dramatically, driven by expectations of higher productivity, faster execution, and better performance.

But an important question now emerges. How is AI being used in practice? Where does it genuinely add value through incremental improvement—automating tasks and workflows—and where is it being implicitly expected to drive strategic or transformational change?

The risks and rewards in these two uses are very different.

Why AI Pulls Us Toward the Middle

This is where the distinction between story and statistic becomes important.

Generative AI learns from historical data and optimises around statistical averages. This makes it exceptionally good at improving what already exists. It also means that, left to itself, it tends to pull decisions toward the middle rather than away from it. It smooths variation, elevates what is most common, and treats unexplained deviation as noise.

AI can certainly produce surprising recombinations, but it cannot reliably take responsibility for deviation—or bear the cost of being wrong.

Used well, it creates enormous value in incremental innovation and productivity improvement.

Original insight, however, rarely sits neatly at the centre of a distribution. It often emerges at the edges, bridging gaps across what data cannot yet explain.

Story, in this sense, is the leap leaders take before the data can fully justify it.

Breakthroughs Come From the Edges

For centuries, hand-copied manuscripts for elites were considered a stable and proven system for reading and knowledge dissemination. There was little evidence that widespread access to information would improve societies. When the printing press emerged, it was feared as destabilising and unproven.

Leaders who embraced it acted on a belief about how knowledge should circulate. Modern science, mass education, and democratic institutions followed much later.

A more recent example is Amazon Web Services. When AWS began, enterprise data strongly favoured owning servers and managing IT in-house. The numbers supported incremental optimisation of existing enterprise software approaches. Centralising computing as a shared utility looked risky and unnecessary.

Amazon pursued it anyway. A new category emerged, but only after years of uncertainty and scepticism.

In both cases, the insight was improbable by the standards of its time. That is precisely what made it transformational.

It is these transformational decisions—not incremental improvements—that create enduring advantage for organisations.

The Quiet Risk: Everything Starts to Look the Same

As organisations begin using AI not just for execution but for more ambitious activities—innovation planning, strategy, and deciding what to bet on next—a subtle shift occurs in how value is created.

Ideas that resemble the past gain credibility faster. Median performance improves, but the room for non-consensus bets narrows as decisions increasingly converge toward what looks statistically defensible.

Truly original ideas, when outsourced to the machine too early, risk being absorbed into a system that rewards what is already common.

Over time, this risks creating organisations that are highly efficient at refinement and increasingly cautious about reinvention, since transformational innovation, by its very nature, requires breaking patterns, not reinforcing them.

Two underlying lessons follow.

First, this is not a failure of AI systems, which will continue to improve. It is the predictable outcome of applying an averaging system to a task that depends on backing the unusual.

Second, while bold deviations have produced some of the most powerful transformations in history, many deviations also fail. That reality makes leadership judgment more important, not less.

What Leaders Must Do Deliberately

In practical terms, organisations need to separate incremental innovation from transformational innovation explicitly. AI should dominate the former, while humans must lead the latter. Mixing them into a single decision pipeline almost guarantees convergence.

Leaders also need to decide where AI informs decisions and where it must be constrained. Using AI to improve performance within an existing way of operating is fundamentally different from using it to decide when that way of operating itself needs to change.

Most importantly, organisations must protect deviation deliberately.

This can mean reserving capital for ideas that cannot yet be justified by data. It can mean designing strategy forums where anomalous signals are examined rather than averaged away. It can also mean governance processes where some bets are evaluated on a clear story about where the world is going—not only projected returns.

These practices do not guarantee success, but they can protect against premature convergence.

Knowing When to Step Outside the Model

John Snow’s hand-drawn map captures something essential about deep, original insight. It relied less on complete data and more on connecting patterns others dismissed. That capacity remains, at least for the foreseeable future, innately human.

As leaders increasingly use AI to raise incremental performance and optimise execution, the real test of transformation will depend on knowing when to trust the numbers—and when to step outside them.

Optimisation will scale everywhere. Transformation will still require leaders willing to step outside the model.

That judgment cannot be automated.

About the author

Debleena Majumdar

Entrepreneur, business leader, and author

Debleena Majumdar is an entrepreneur, business leader, and author who works at the intersection of narrative, numbers, and AI. She believes that in a world where AI can generate infinite content, the differentiator is not volume but meaning: the ability to connect strategy to a coherent story people can trust, follow, and act on.

She is the co-founder of stotio, an AI-powered Narrative OS built to help businesses distil strategy into clear, connected growth narratives across the moments that shape outcomes, whether fundraising, sales, brand evolution, or leadership reviews. stotio blends structured storytelling frameworks with a context-driven intelligence layer, so organisations build narrative consistency across stakeholders and decisions.

Debleena’s foundation is deeply rooted in finance and investing. Over more than a decade, she worked across investment banking, investment management, and venture capital, with experience spanning firms such as GE, JP Morgan, Prudential, BRIDGEi2i Analytics Solutions, Fidelity, and Unitus Ventures. That grounding in capital and decision-making continues to shape her work today: she is drawn to the point where metrics end and decisions begin, and where leaders must translate complexity into conviction.

Alongside business, Debleena is a published author of multiple fiction and non-fiction books. She has contributed data-driven business articles to publications including The Economic Times over several years. She loves singing and often creates her own lyrics when she forgets the real ones. Humour is her forever panacea.

Across roles and mediums, her learning has been to use narrative with numbers as a strategic tool that makes decisions clearer, communication sharper, and growth more aligned.
