From Invisible to Indispensable: What Two Years of Engineering Intelligence Taught Me

7 min read
Engineering Leadership · Tools & Productivity

Engineering teams generate enormous signal - commits, PRs, deployments, incidents. For most of my career, that signal just disappeared into the ether. We were busy shipping. Who had time to study the patterns?

That changed when I started working seriously with software engineering intelligence (SEI) tools. What I expected was better dashboards. What I got was a forcing function for every hard conversation an engineering leader avoids: Are we actually improving? Where is the waste hiding? What does 'good' even look like for us?

Two-plus years into that journey, here's what I actually learned - and what I wish someone had told me before I started.

The Measurement Trap Nobody Warns You About

Most teams approach engineering metrics like they approach KPIs in sales: pick a number, set a target, incentivize toward it. This works terribly in engineering.

When you start measuring cycle time, people optimise cycle time - sometimes at the expense of quality or collaboration. When you start measuring PR throughput, you get smaller PRs that don't always reflect real progress. Goodhart's Law is merciless in software: the moment a measure becomes a target, it ceases to be a good measure.

The teams that do this well start from a different premise. Instead of 'what do we want to improve?', they ask 'what does our data actually say about how we work?' They use metrics as a diagnostic tool, not a performance tool. The difference sounds subtle. In practice it determines whether you build a culture of learning or a culture of gaming.

Three Phases That Nobody Skips

Every organisation I've seen adopt engineering intelligence tools goes through roughly the same arc, whether they plan to or not.

Phase 1: Build Trust

This phase is entirely about legitimacy. Before any metric can drive behaviour change, engineers need to believe the data is fair, the framing is honest, and the intent is improvement - not surveillance.

The fastest way to destroy this trust is to start sharing individual developer metrics with managers before the team has had a chance to understand and contextualise the data themselves. Once that happens, you're not running an engineering intelligence program - you're running a monitoring program, and everyone knows it.

The teams that build trust effectively share data with engineers first. They treat the initial metrics conversations as retrospectives, not reviews. They invite teams to challenge the data and flag where it's misleading. That process is slow. It's also the only foundation that holds.

Phase 2: Operationalise

Once the data is trusted, the question becomes: how does it change how we work day to day? This is where most organisations stall.

It's not enough to have a dashboard that shows you're slow at code review. You need a workflow that surfaces the slow review at the right moment - when someone can actually do something about it. You need the data embedded into the existing rituals: standups, planning, 1-on-1s, retrospectives. Not as an additional reporting overhead, but as a shared language that makes those conversations faster and more concrete.
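To make 'the right moment' concrete: the kind of check that could feed a standup is small. A hedged sketch, assuming a hypothetical `OpenPR` record and a 24-hour review norm, both stand-ins for whatever your team and tooling actually use:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical shape of a "PR awaiting review" record; a real version
# would pull these from your Git host's API or SEI tool.
@dataclass
class OpenPR:
    number: int
    author: str
    review_requested_at: datetime

REVIEW_SLA = timedelta(hours=24)  # an assumed team norm, not a universal rule

def standup_blockers(prs: list[OpenPR], now: datetime) -> list[str]:
    """One line per PR that has waited past the review SLA, oldest
    first -- the short list a standup can act on immediately."""
    stale = [pr for pr in prs if now - pr.review_requested_at > REVIEW_SLA]
    stale.sort(key=lambda pr: pr.review_requested_at)
    return [
        f"#{pr.number} ({pr.author}) waiting "
        f"{(now - pr.review_requested_at).total_seconds() / 3600:.0f}h for review"
        for pr in stale
    ]

now = datetime(2024, 3, 6, 9, 30)
open_prs = [
    OpenPR(101, "asha", datetime(2024, 3, 5, 16, 0)),  # within SLA
    OpenPR(97, "ben", datetime(2024, 3, 4, 10, 0)),    # stale
    OpenPR(95, "carol", datetime(2024, 3, 3, 9, 0)),   # stale
]
for line in standup_blockers(open_prs, now):
    print(line)
```

The point isn't the script; it's that the output arrives at the moment someone can unblock the work, instead of in a monthly report.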

The teams that do this well don't schedule a 'metrics review meeting'. They change what they look at when they open a PR. They change the first question asked in a standup when something is blocked. The measurement becomes invisible because it's woven into the work.

Phase 3: Accelerate with AI

This is where it gets genuinely exciting - and genuinely risky.

AI-assisted tools - from code generation to automated PR summaries to anomaly detection in deployment pipelines - can dramatically amplify the productivity of a team that already has healthy practices. The same tools applied to a team that hasn't done the trust and operationalisation work tend to make existing dysfunction faster and harder to see.

The signal I watch for: are engineers using AI suggestions critically or reflexively? Reflexive acceptance of AI output without review is the engineering equivalent of copy-pasting from Stack Overflow without reading the answer. Velocity goes up. Understanding goes down. Technical debt accumulates in new and creative ways.

The teams getting this right are the ones who treat AI as a collaborator that needs calibration, not a vendor that has been paid to be correct.

What the Data Reveals That Your Processes Hide

One of the most consistent surprises when teams start working seriously with engineering intelligence data: the bottlenecks are almost never where you thought they were.

Teams that complain about slow delivery typically have a review bottleneck, not a development bottleneck. Teams that complain about quality issues typically have a planning problem - work that arrives at development under-specified, generating rework cycles that look like bugs. Teams that think they have a people problem often have a context-switching problem - too many parallel workstreams generating coordination overhead that swamps individual capacity.
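One way to see that shape is to decompose lead time into time-in-state per stage and let the totals speak. A minimal sketch with made-up numbers, assuming your tooling can export (item, stage, hours) transitions (the stage names and figures are illustrative):

```python
from collections import defaultdict

# Hypothetical time-in-state events for three work items:
# (item, stage, hours spent in that stage). A real SEI tool derives
# these from ticket transitions and PR timestamps.
events = [
    ("A", "coding", 6), ("A", "review", 30), ("A", "deploy", 2),
    ("B", "coding", 10), ("B", "review", 44), ("B", "deploy", 1),
    ("C", "coding", 4), ("C", "review", 20), ("C", "deploy", 3),
]

stage_hours = defaultdict(float)
for _, stage, hrs in events:
    stage_hours[stage] += hrs

total = sum(stage_hours.values())
for stage, hrs in sorted(stage_hours.items(), key=lambda kv: -kv[1]):
    print(f"{stage:>7}: {hrs:5.0f}h ({hrs / total:5.1%})")
```

In this toy data the team that 'feels slow at development' is in fact spending most of its elapsed time in review - exactly the kind of mismatch between narrative and data the section describes.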

None of this is obvious from standups or retrospectives. It requires looking at the shape of how work actually moves - not how we describe it in planning sessions.

The Cultural Shift Underneath the Tooling

The organisations where SEI adoption genuinely transforms output share a cultural trait that I don't think you can install with tooling: psychological safety around being slow.

If surfacing that your team has a long review cycle feels dangerous - if the instinct is to explain it away or blame adjacent teams before investigating it - you'll use the data defensively. You'll surface the metrics that make you look good and bury the ones that don't.

But if the culture treats 'we found a bottleneck' as good news - evidence that the system is working, that you now have something specific to improve - then the data becomes genuinely useful. The measurement isn't the transformation. The transformation is in how the organisation relates to the truth the measurement surfaces.

Making the Business Case (Without Lying)

Engineering intelligence is expensive - in tool cost, in implementation time, in the management overhead of doing the trust-building phase properly. At some point, someone will ask if it's worth it.

The honest answer is: it depends on what you do with it. The tool is not the investment. The investment is in building the practices, the habits, the shared language, and the psychological safety that allows the data to change behaviour. Teams that treat the tool as the end state see modest gains. Teams that treat the tool as the beginning of a learning program tend to see compounding returns - not because the metrics went up, but because the team got systematically better at diagnosing and fixing their own constraints.

If I were making the case to a CFO, I wouldn't lead with developer productivity percentages. I'd lead with decision quality: how many months of wrong-direction work did we avoid because we could see the data clearly? How many performance conversations became coaching conversations because we had shared context? How much faster did we onboard new engineers because we could show them exactly where the team's flow was and wasn't working?

Those returns are harder to quantify. They're also the ones that compound.

What I'd Do Differently

If I were starting this journey again, I'd spend more time on the trust phase and less time on the dashboard. I'd involve engineers in defining what 'good' looks like before I started measuring anything. I'd resist the pull toward comprehensive measurement - picking three metrics you act on beats tracking thirty that you review monthly.

And I'd be honest with myself and my team about the difference between measuring to improve and measuring to report. The former changes how you work. The latter just changes what you say in the quarterly business review.

The signal was always there. Learning to read it - and to act on what it actually says rather than what we hoped it would say - is the real work.

Written by
Sudhir Dharan

Engineering leader with 19+ years of experience building and scaling high-performance teams. Passionate about engineering culture, AI adoption, and growing the next generation of tech leaders.
