Nuance is the new differentiator

When every team has access to the same AI tools, your advantage will be in knowing what wasn’t said

Shrinath V


“Hum logo ko samajh sako to samjho dilbar jaani.

Jitna bhi tum samjhoge, utni hogi hairani.”

(If you can understand us, try.

The more you understand, the more you’ll be surprised.)

Phir Bhi Dil Hai Hindustani (2000)

That line is a sharp piece of commentary on human behaviour, written by Javed Akhtar as an ode to understanding the Indian psyche. It captures what makes understanding people so difficult, especially when we try to do it at scale.

Most research, especially in tech companies, narrows its focus to the point of usability. We study what users click, what they skip, where they drop off. We build templatized user personas. Behaviour is flattened into metrics, and context becomes optional.

But people don’t live in clean funnels. Their actions are shaped by memory, habit, and cultural precedent, much of which sits outside the interface. When building for India, these broader, invisible layers often matter more than what is measurable.

What data doesn’t show

A few years ago, I was part of a roundtable at the NASSCOM Product Conclave. It was a small group, mostly senior leaders from product and business teams, in a room that encouraged candour more than theatre. At some point, the conversation turned to payment behaviours.

A leader from Amazon spoke about Cash on Delivery (COD). Despite multiple attempts to encourage digital payments—cashback offers, better refund systems, simplified checkout—a significant segment of users continued to prefer paying at the end. Delivery first, payment later.

This caught the attention of a leader from Redbus, who said the opposite was true for them. Most users paid upfront; very few waited. It wasn’t just anecdotal. Their internal numbers supported the claim.

The contrast was striking. Both companies had built digital platforms used by millions. Both were drawing on real data. Yet the patterns didn’t align, and neither could fully explain why.

I suggested a possibility: maybe the difference had less to do with product design, and more to do with the kinds of offline behaviours users were transferring to an online context.

When you walk into a store, you inspect multiple products, choose one or more, and then pay. Once the payment is made, the risk lies with you, so you want to be certain you won’t regret your choice. That seemed to be the logic Amazon users were applying. Cash on Delivery wasn’t rooted in distrust of the company or its policies. It was something more instinctive: a pattern absorbed over years of shopping in physical stores, where the transaction doesn’t begin with payment; it ends with it. You check the item for damage, maybe even try it out if possible. Only after you’re satisfied does the money change hands.

Online, that physical verification isn’t possible. But the need for reassurance hasn’t gone away. COD offers a surrogate—a way to preserve the familiar order of things. Receive the item first, hold it in your hands, and then pay. The transaction doesn’t feel complete until the product arrives. Payment before delivery feels like a leap. COD restores a sense of control. 

But buying a bus ticket is different. At a bus station, you pay to reserve your seat. Delay, and you risk missing out. That was the pattern Redbus users were responding to, shaped not by trust but by scarcity. Pay early, then wait.

Both behaviours made sense. But neither could be understood through company data alone. The logic driving these decisions was outside the frame.

The hidden cost of the desire for clarity

That exchange stayed with me. It pointed to a blind spot that shows up often, not just in product work, but in research, leadership, and decision-making more broadly. Relying on data alone—or even just on what people say—can leave out a significant part of what shapes human behaviour.

With AI tools now entering the research process, that gap may quietly widen.

To be clear, I’m not against the use of AI in parsing data and research for insights. The potential here is immense. Today, only a small fraction of research ever reaches the people making decisions. Most of it is filtered—summarised into neat slides for senior management, with background material relegated to an appendix, often more to showcase the credibility of the findings than to invite engagement. This isn’t because researchers are careless. It’s because most leaders haven’t been trained to work with ambiguity, and few have the time to engage with the raw material. So the defaults take over: simplify, compress, reduce to pattern.

This is where LLMs could offer something new. They can surface what might otherwise remain hidden. They can process, translate, and cluster information at speeds no human team can match. For example, a team could instantly compare how users in Jeypore relate to packaging in Odia vs those in Tirupur reading it in Tamil. They could filter feedback across surveys, interviews, and chat transcripts to identify how first-time shoppers articulate risk, or how migrant workers describe trust in payments, not through ratings but through phrasing. Language, tone, and metaphor can all be analysed at scale, offering a more layered view of how people speak and react.
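As a toy illustration of that kind of thematic grouping: the sketch below clusters feedback snippets by the vocabulary of risk they use. Everything here is invented for illustration (the snippets, the theme labels, the cue words), and a real pipeline would lean on embeddings or an LLM classifier rather than hand-written keyword lists.

```python
from collections import defaultdict

# Hypothetical feedback snippets (invented for illustration),
# already translated into English.
snippets = [
    "What if the parcel never arrives and I lose my money?",
    "I only pay after I can hold the product in my hands.",
    "Refunds take too long, so I prefer cash on delivery.",
    "Seats sell out fast, so I book and pay right away.",
    "If I wait to pay, someone else takes my seat.",
]

# Toy stand-in for an LLM or embedding step: tag each snippet by the
# risk vocabulary it uses. The theme names and cue phrases are made up.
themes = {
    "payment-risk": ["lose my money", "refund", "cash on delivery", "hold the product"],
    "scarcity": ["sell out", "book", "my seat"],
}

def tag(snippet: str) -> str:
    """Assign a snippet to the first theme whose cue phrases it contains."""
    for theme, cues in themes.items():
        if any(cue in snippet.lower() for cue in cues):
            return theme
    return "uncategorised"

clusters = defaultdict(list)
for s in snippets:
    clusters[tag(s)].append(s)

for theme, items in clusters.items():
    print(theme, len(items))
```

Even in this crude form, the point survives: the same word ("pay") carries different logics in different clusters, which is exactly the kind of phrasing-level pattern an LLM could surface at scale.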

It’s a real opportunity. Richer access, faster turnarounds, wider reach.

But it comes with its own risks.

The more efficiently insight is surfaced, the easier it becomes to treat it as complete. We start trusting summaries as stand-ins for understanding. But not all research is verbal. Not all insight comes from what is said. Some of it sits in what people allude to indirectly, or how they shift their language mid-sentence, or what they leave out altogether. These cues often don’t survive compression. 

Not every insight comes with a confidence score. Some of the most meaningful ones arrive as hesitation, silence, or a shift in tone. This is where intuition matters. The ability to notice what’s missing, to sit with ambiguity, and to hear what isn’t being said.

I saw this firsthand during the pandemic, while coaching the product team of an e-commerce company that had seen a sharp rise in sellers from smaller towns. The team wanted to understand how to support this segment better and asked for help structuring their research.

We began by putting together a qualitative research plan. The team had come in with a set of questions focused on product-level friction: were sellers facing issues during onboarding, where were they dropping off, what features were hard to use? Together, we expanded the scope. Instead of staying within the boundaries of product usability, we widened the lens to include context—what kind of businesses these sellers came from, who was helping them, how they thought about selling online, and how that fit into their broader identity as business owners.

The team ran the research themselves. They spoke to sellers over phone calls, took notes, and shared both their summaries and the recordings in a synthesis session. Initially, they concluded that the onboarding journey wasn’t the real issue. Most sellers weren’t handling setup themselves. They had agencies or clerical staff managing the operational aspects—uploads, listings, responses. So the team wondered whether they were focusing on the wrong user.

That’s when I stepped in and suggested we pause. We went back and listened carefully. What stood out wasn’t confusion or lack of access. It was hesitation.

These weren’t first-time business owners. They were second-generation entrepreneurs—the sons or daughters of men who had built local, thriving businesses before the pandemic. They had grown up around the business, often helping out from a young age. Inside the organisation, they were familiar faces, affectionately called Baba or Munni. It looked like a cohesive unit, one that would support them as they took the business online.

But behind that familiarity sat a set of quiet insecurities.

Online was unfamiliar ground. There was no precedent. They knew they would inherit the family business someday; that much was expected. But they wanted to show that they had earned that role, not just inherited it. Taking the business online was their way of signalling initiative, of bringing in fresh ideas. But it also carried risk. What if it failed? What if they were cheated? What if they invested and no one bought?

The hesitation wasn’t about how to use the platform. It was about what going online represented.

At the heart of it, this wasn’t a usability issue. It was about identity. About proving they belonged—not just as heirs, but as contributors.

Once we framed it that way, the pattern became clear. This wasn’t a product problem. It was a marketing one.

The insight gave the team a different lens to look at the problem. Instead of trying to solve the hesitation through additional features or onboarding tweaks, could they support these sellers more holistically in their first 90 days?

That meant building marketing collateral, not just for outreach, but to build confidence. Stories of other second-generation entrepreneurs who had made the shift, to reassure them that they weren’t alone. Support structures to respond to doubts and slow starts. And, equally important, visible signals of progress: badges, milestones, shareable formats through which sellers could show their families, and themselves, that they were doing well, that this new step wasn’t just an experiment but a direction worth taking seriously.

The core problem hadn’t been buried in data. It had been right there, in the tone of the conversations, in what was not being said. But to see it, the team had to move beyond synthesis and into context. They had to shift from interpreting behaviour to understanding position—not just what the seller was doing, but who they were trying to become.

This shift was only possible because the leadership had the patience and foresight to walk the journey. They didn’t rush to simplify the problem or push for immediate fixes. They allowed the team to stay with the discomfort of not knowing, and made space for the ambiguity to speak.

Not all teams work this way.

In most organisations I’ve seen, the pull toward clarity is strong. Quantitative data offers clean edges. Charts and funnels feel conclusive. And even when the findings are partial, they lend themselves to action. That’s what makes them seductive.

This is where the risk grows with the adoption of AI.

From signal to meaning

AI systems will make it easier to access, analyse, and summarise both qualitative and quantitative inputs. More patterns surfaced faster. More dashboards, built on top of compressed interpretation. 

The distance between input and insight will shrink. But so will our tolerance for ambiguity.

The same questions that once required sitting with transcripts, listening for tone, and debating possible meanings will now arrive in tidy clusters. That’s useful—but it’s also dangerous. Because the systems we use won’t tell us what was flattened or filtered out. And over time, we may stop asking.

The real risk isn’t that AI will hallucinate facts.

It’s that humans will hallucinate meaning—from summaries that feel tidy, predictive, and authoritative.

But meaning doesn’t always live in clarity. It often lives in hesitation. In tone. In the part that didn’t make it to the transcript.

And when we adopt these systems—and it’s a question of when, not if—we need to stay clear-eyed about their limits. Not everything important can be captured in words. Not every signal fits into a dashboard. Sometimes, users themselves can’t fully articulate what they feel or need. They grope for words, circle around a truth, or stay silent—not because it’s irrelevant, but because it’s hard to say.

Culture, fear, and self-doubt shape what’s said aloud—and what isn’t.

Or as one line from a Bollywood song reminds us:

Jo bhi main, kehna chahoon,

Barbaad kare, alfaaz mere

(Whatever I try to say,

My words ruin it.)

Rockstar (2011)

That’s why the most important skill product teams must build alongside AI fluency is discernment.

Not intuition-as-guesswork, but discernment as a practiced skill: the ability to notice what’s missing, the judgment to resist over-simplification, and the willingness to sit with ambiguity instead of rushing to complete the task.

And in a world where every team will soon have access to the same AI tools, it becomes a real edge.

Because when insight is automated, it’s discernment that becomes the differentiator. Not just to detect nuance—but to design with it.

About the author

Shrinath V

Independent Product Strategy Coach

Shrinath V is an independent product strategy coach who works with senior leaders to identify blind spots and make smarter bets on what lies ahead. With two decades of cross-sector experience, he has coached leaders and teams across industries, and mentored hundreds of startups through the Google for Startups Accelerator—both in India and international markets.

Shrinath is on LinkedIn and writes a blog, Blind Spots to Big Bets, on Substack.