Field notes

Win-loss, Part 3 of 5: the five anti-patterns

Lessons logged and never applied. Themes that don't match answered questions. Five failure modes I see when a win-loss program looks healthy from the outside but isn't moving the win rate.

Sarah Smith · 6 min read · Research

Part 1 of this series argued for capturing 18 fields per bid. Part 2 argued for a 30-minute debrief with one DRI. This part is about what happens after you do both and the win rate still doesn’t move.

I’ve watched this failure mode at three consulting clients in the last two years. The mechanics of the win-loss program look fine from the outside. The debriefs happen. The notes get written. The dashboard is clean. And the same bid gets lost for the same reason two quarters later. These are the five anti-patterns that produce that outcome.

This is a composite essay drawn from real proposal engagements without naming employers or customers. See the footer note.

Anti-pattern 1 — Lessons logged, never applied

The most common failure. A debrief surfaces a clear theme: “our executive summary was too generic; the buyer couldn’t tell why we’d win their specific business.” That theme gets typed into the CRM’s “lessons learned” field. The next bid starts. Nobody rereads the lessons. The generic executive summary ships again.

The diagnostic: “how many items from the lessons-learned log have shown up in bid reviews this quarter?” If the answer is zero, the log is a compliance artifact, not a learning artifact.

The fix is structural, not behavioral. Lessons have to be attached to the KB block they implicate — not to a separate log. If the lesson is “our exec summary was generic,” the edit goes into the exec summary template block. The next bid that pulls from that block inherits the edit. Part 4 of this series is entirely about this feedback loop.

Anti-pattern 2 — Themes that don’t match answered questions

A buyer says “we went with the incumbent.” The team logs the loss reason as “incumbent bias.” That’s a plausible reading. It’s also, frequently, the wrong one.

“We went with the incumbent” is a polite thing buyers say when they don’t want to explain what your proposal got wrong. Switching costs are real, but they rarely decide a bid that a challenger was otherwise winning. If the challenger’s technical response scored equal to or higher and the incumbent won anyway, the tiebreaker was usually price or risk perception — and the buyer didn’t tell you which.

The diagnostic question I ask teams: “when you logged ‘lost to incumbent,’ did the debrief transcript include any claim that would have been disputed if a competitor had heard it?” If not, you don’t have a loss reason. You have a loss narrative.

The fix: stop logging outcomes at the level of “incumbent won” or “price.” Log at the level of “scorecard dimension X was rated below competitor.” If the buyer won’t share the scorecard — most don’t — log the absence and don’t backfill it with a story.
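To make that concrete, here is a minimal sketch of a loss record at that granularity, in Python. The field names (bid_id, dimension, evidence_quote, scorecard_shared) are illustrative, not a prescribed schema; the point is that the record holds a scorecard dimension and a verbatim quote, or an explicit absence, rather than a narrative.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LossReason:
        """One loss logged at scorecard-dimension level, not narrative level."""
        bid_id: str
        dimension: str                     # e.g. "technical approach", "transition risk"
        our_score: Optional[float]         # None when the buyer never shared the scorecard
        competitor_score: Optional[float]
        evidence_quote: str                # verbatim line from the debrief transcript
        scorecard_shared: bool             # log the absence; don't backfill a story

    # "We went with the incumbent" with no scorecard is recorded as an absence, not a reason:
    example = LossReason(
        bid_id="2024-041",
        dimension="unknown",
        our_score=None,
        competitor_score=None,
        evidence_quote="We went with the incumbent.",
        scorecard_shared=False,
    )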

Anti-pattern 3 — Debriefs only after losses

If you only debrief losses, you learn half of what you can learn. The bids you won are the richer data — they include the discriminator language that worked, the exec summary that moved the evaluator, the SME-heavy technical section that cleared pass/fail on the first read.

Most teams skip win debriefs because the win is the goal and the team is already on the next bid. Understandable, and wrong. A 20-minute win debrief ships more durable intelligence than a 45-minute loss debrief, because the win transcript contains quotes like “your response to requirement 4.3 was the clearest we’ve seen.” That is a reusable win theme with a timestamp. You will never get it from a loss debrief.

The fix: debrief the same way, regardless of outcome. Same four questions, same DRI, same format. Win debriefs are shorter on average because the team is happy; they are not less informative.

Anti-pattern 4 — Aggregating across buyer types

A vendor who sells into both federal agencies and mid-market SaaS runs one win-loss log. The federal losses cluster around compliance gaps — FedRAMP, FIPS, ATO sponsorship. The SaaS losses cluster around integration story — our Salesforce connector, our webhook latency. The aggregate report says “we lose on technical fit.” That’s correct at a level of abstraction where nothing is actionable.

The same vendor makes the same mistake a tier down. “Healthcare” is not a segment; “hospital system IT” and “healthcare payer” and “digital health vendor” lose for different reasons. A segment is a category whose members’ losses cluster.

The fix: segment the win-loss data before reading it. If the segmentation produces fewer than eight bids in a segment over six months, the segment is too narrow for statistical signal and you’re stuck doing qualitative analysis — which is fine, as long as you know that’s what you’re doing.
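A sketch of that check, assuming nothing more than a list of (segment, outcome) pairs pulled from the log. The eight-bid threshold is the one from the paragraph above; the segment names and counts are made up.

    from collections import Counter

    # (segment, outcome) pairs for the trailing six months; values are illustrative
    bids = [
        ("federal", "loss"), ("federal", "win"), ("mid-market SaaS", "loss"),
        ("hospital system IT", "loss"), ("healthcare payer", "win"),
    ]

    MIN_BIDS = 8  # below this, treat the segment qualitatively, not statistically

    for segment, n in Counter(s for s, _ in bids).items():
        mode = "read the rate" if n >= MIN_BIDS else "qualitative only"
        print(f"{segment}: {n} bids -> {mode}")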

Anti-pattern 5 — No feedback loop to the KB

This is the one anti-pattern that PursuitAgent specifically exists to address, so I’ll flag the bias: you can do this without our product; you just have to build the mechanism yourself.

A debrief surfaces that the security section was rated weak. The proposal manager takes notes. The security SME is in the debrief and hears the feedback. Neither of them updates the security KB block. Three months later the same security language ships on the next bid.

The fix, which I’ve watched teams implement with a Notion doc and a quarterly review and no PursuitAgent at all: every debrief comment that implicates a KB block creates a task to edit that block. The task has an owner — usually the SME who drafted the original — and a deadline. The block shows an “edit requested” flag until the edit ships and a reviewer closes it.
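A minimal sketch of that mechanism, in Python only to show the shape of the data. The same three pieces (a flagged block, an owned task with a deadline, a reviewer who clears the flag) work just as well as columns in a Notion database; the names here are illustrative, not how any particular tool models it.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class KBBlock:
        block_id: str
        owner: str                     # usually the SME who drafted the original
        edit_requested: bool = False   # stays True until the edit ships and a reviewer closes it

    @dataclass
    class EditTask:
        block_id: str
        source_comment: str            # the debrief comment that implicated the block
        owner: str
        due: date

    def request_edit(block: KBBlock, comment: str, days: int = 30) -> EditTask:
        """Turn a debrief comment into an owned, dated edit task and flag the block."""
        block.edit_requested = True
        return EditTask(block.block_id, comment, block.owner, date.today() + timedelta(days=days))

    def close_edit(block: KBBlock) -> None:
        """A reviewer clears the flag only after the edited block ships."""
        block.edit_requested = False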

This is administrative. It’s not glamorous. It’s the thing that separates a win-loss program that compounds from one that doesn’t.

A few more, briefly

Five was the headline. A handful more I’ve watched:

  • Debrief conducted by the salesperson who lost the bid. Defensive narrative or corrective narrative — neither is the buyer’s view. A third party should run the debrief.
  • Win rate reported as a single number. Bid, pursued-bid, and qualified-bid win rates tell three different stories. Teams that report one number often report the flattering one; a worked example follows this list.
  • Debriefs saved as PDFs. PDFs aren’t queryable. A folder of PDFs is write-only memory.
  • No cross-reference to the compliance matrix. If the compliance matrix and the win-loss log don’t join, you lose the signal that connects scored requirements to repeat failure modes.
  • Treating the debrief as a venue for blame. An adversarial debrief gets shorter every quarter. An analytical one gets longer and more useful.
  • One-off debriefs that never get re-read. Re-read the prior quarter’s debriefs before starting the next one. If you can’t remember what you learned, you didn’t learn it.
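On the win-rate point above, the arithmetic is the whole argument. The counts below are invented; the three denominators are what matter.

    # Illustrative counts only; the point is the three denominators, not the numbers.
    qualified = 60   # opportunities that passed qualification
    pursued   = 40   # of those, bids the team chose to pursue
    submitted = 32   # of those, bids actually submitted
    won       = 12

    print(f"bid win rate:           {won / submitted:.0%}")   # 38% -- the flattering one
    print(f"pursued-bid win rate:   {won / pursued:.0%}")     # 30%
    print(f"qualified-bid win rate: {won / qualified:.0%}")   # 20%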

Part 4, next Wednesday, is the KB feedback loop — exactly how a debrief comment becomes a block edit that ships into the next bid.


Sources

  1. APMP Body of Knowledge — Post-submission review
  2. PursuitAgent — Debrief rituals that actually run
  3. PursuitAgent — Win-loss intelligence starts on day one