Team KPIs we stopped tracking
Two proposal-team metrics we killed this year. Why each one misread the work, and what we replaced them with. Short, opinionated, written from my own dashboard.
Two KPIs on our internal proposal-team dashboard got retired this year. One was my idea and I was wrong about it. The other was an industry-standard metric that didn’t survive contact with reality. Short post on both.
The one I was wrong about — “words drafted per day”
I wanted a proxy for proposal-team throughput. Number of words the team drafted per working day, averaged over a rolling two weeks. High number, busy team. Low number, blocked team. That was the theory.
Two problems with it.
One, words drafted is the wrong unit. A 200-word executive summary that moves a bid is worth more than 2,000 words of boilerplate pulled from the KB. The metric treated the boilerplate as productivity, which incentivized filler. Nobody gamed it on purpose; the team just got slightly less ruthless about cutting.
Two, the metric didn’t distinguish new writing from revision. A heavy revision week — the kind where three reviewers hammer an exec summary down to a clean two pages — would show as low throughput. That week was often the highest-value week in the bid cycle. The dashboard said the team was slow. The bids said otherwise.
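To make that concrete, here is a toy version of the retired metric with invented daily counts; the rolling two-week window is dropped for brevity, and none of these numbers come from our real log. The boilerplate week scores as productive and the revision week scores as blocked, which is exactly the ordering that sank the metric.

```python
# Toy sketch of the retired "words drafted per day" metric.
# All numbers are invented for illustration.
from statistics import mean

# Hypothetical new words drafted on each of five working days.
boilerplate_week = [2000, 1800, 2100, 1900, 2000]  # KB boilerplate pasted into sections
revision_week = [150, 80, 120, 60, 90]             # cutting an exec summary down to two sharp pages

def words_per_day(daily_words):
    """The old throughput proxy: average words drafted per working day."""
    return mean(daily_words)

print(words_per_day(boilerplate_week))  # 1960.0 -- reads as a busy team
print(words_per_day(revision_week))     # 100.0  -- reads as a blocked team
```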
What we track instead: completed proposal sections against deadline, color-coded by review stage. Same intuition — is the work moving? — at the granularity that actually matches the work.
The industry-standard one — “bid win rate”
Bid win rate, as a single number, is a metric that nobody who has run a proposal team for long actually reads. It averages across deal sizes, buyer types, strategic fit, and bid/no-bid discipline. It moves for reasons that have nothing to do with the proposal team’s work.
Two examples from our own log. In Q2 last year our win rate dropped 6 points. The drop was entirely because sales started pursuing a new segment we hadn’t built past-performance in — bids we were always going to lose the first three times. The proposal team wrote better responses in Q2 than in Q1. The dashboard said the opposite.
In Q3 our win rate jumped 9 points. The jump was because one very large customer renewed after an incumbent-defense cycle where we were the incumbent. Renewals dominate the math; chasing them isn’t the same work as winning new logos.
What we track instead: three numbers, separately.
- Bid-qualified win rate — wins divided by bids-past-go-decision. This isolates the proposal team’s work from the bid/no-bid decision.
- Segmented win rate — the same number, broken out by customer type. The aggregate hides the segment signal every time.
- Pursued-bid ratio — what fraction of opportunities got past go. This belongs to sales, not to proposals, but we chart it next to win rate so nobody reads the win-rate number alone.
Three numbers, tracked separately, are harder to fit on a dashboard tile than one. They are also closer to true. We make the dashboard tile bigger and live with it.
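For concreteness, here is a minimal sketch of how all three fall out of a flat bid log. The records and field names below are invented for illustration, not our actual schema.

```python
# Minimal sketch: the three replacement numbers computed from a flat bid log.
# Field names ("segment", "past_go", "won") are placeholders, not our real schema.
from collections import defaultdict

bids = [
    {"segment": "federal",    "past_go": True,  "won": True},
    {"segment": "federal",    "past_go": True,  "won": False},
    {"segment": "commercial", "past_go": True,  "won": True},
    {"segment": "commercial", "past_go": False, "won": False},  # never got past go
]

pursued = [b for b in bids if b["past_go"]]

# 1. Bid-qualified win rate: wins over bids that got past the go decision.
bid_qualified_win_rate = sum(b["won"] for b in pursued) / len(pursued)

# 2. Segmented win rate: the same ratio, broken out by customer type.
tallies = defaultdict(lambda: [0, 0])  # segment -> [wins, pursued bids]
for b in pursued:
    tallies[b["segment"]][0] += b["won"]
    tallies[b["segment"]][1] += 1
segmented_win_rate = {seg: wins / n for seg, (wins, n) in tallies.items()}

# 3. Pursued-bid ratio: fraction of opportunities that got past go at all.
pursued_bid_ratio = len(pursued) / len(bids)

print(bid_qualified_win_rate)  # ~0.67 on this toy log
print(segmented_win_rate)      # {'federal': 0.5, 'commercial': 1.0}
print(pursued_bid_ratio)       # 0.75
```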
What we kept
Two things we almost cut and didn’t:
- Median draft-to-submit latency. This is a team-health metric, not a performance metric. When latency creeps, something is broken — usually SME availability. We check it monthly and dig when it moves.
- Color-team review coverage. What fraction of bids clear pink-team, red-team, gold-team? This one shames us occasionally. We keep it.
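Both are cheap to compute. A minimal sketch, again over invented records with placeholder field names:

```python
# Minimal sketch of the two metrics we kept. Dates and field names are invented.
from datetime import date
from statistics import median

bids = [
    {"first_draft": date(2024, 3, 1), "submitted": date(2024, 3, 18),
     "reviews": {"pink", "red", "gold"}},
    {"first_draft": date(2024, 4, 2), "submitted": date(2024, 4, 30),
     "reviews": {"pink", "red"}},  # skipped gold team; this is the one that shames us
]

# Median draft-to-submit latency in days: a team-health check, not a score.
median_latency = median((b["submitted"] - b["first_draft"]).days for b in bids)

# Color-team review coverage: fraction of bids that cleared all three reviews.
coverage = sum(b["reviews"] >= {"pink", "red", "gold"} for b in bids) / len(bids)

print(median_latency, coverage)  # 22.5 days, 0.5
```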
The test that either metric would have failed
The question I now ask before adding anything to the dashboard: what behavior does this metric reward, and what behavior does it punish? The answer has to be specific. “Rewards high-quality work” is not specific. “Rewards writing 2,000 words of boilerplate and punishes cutting a section down to 200 sharp words” is specific, and it’s the answer I should have written the first time I proposed the words-drafted metric.
Same thing on win rate. What behavior does it reward? Stacking the denominator with safe bids, the renewals and incumbent defenses that were going to close anyway. What behavior does it punish? Deliberately bidding into a segment you know you’ll lose the first three times, which is sometimes exactly what a good go decision looks like. Both of those are the wrong incentives for a proposal team. A bid/no-bid filter run well will sometimes pull the aggregate number down on purpose; the single win-rate metric punishes the pursuit discipline it should reward.
The broader lesson
A metric that doesn’t survive “what work would this incentivize?” should not be on the dashboard. Both of the metrics we cut failed that test. Their replacements pass it, for now. Check back next year to see which ones I talk about retiring then.
If I were starting a proposal team’s dashboard from scratch today, I’d put three numbers on it and nothing else: bid-qualified win rate (segmented), color-team review coverage, and median draft-to-submit latency. Four, if you add pursued-bid ratio so nobody reads the first number in isolation. Five feels like too many. Six is a dashboard nobody reads.
Two more that I don’t put on dashboards and keep in a separate monthly review: how often we lose for the same reason twice in the same segment, and how often a KB block revised after a debrief gets re-cited in a subsequent bid. Both are slow signals. Both tell you whether the program is compounding or not. Neither belongs on a weekly dashboard because the noise floor on a weekly read is too high.
I also keep a written note every quarter of what metric I’m tempted to add. Most of the time the note from last quarter tells me I already considered that metric and already decided not to. That file is now the most-read doc in my team folder, which is probably the most useful KPI I track — the number of times I almost re-invented a bad metric.