Field notes

Shipped: the win-loss dashboard with debrief capture


PursuitAgent · Engineering

Last week we shipped the win-loss dashboard. It’s the feature behind the five-part series running this month, and the surfaced view of the theme-clustering and pair-analysis capabilities the Win-Loss Intelligence page describes. This is the changelog.

What’s in it

Debrief capture. A 30-minute structured form attached to every proposal record. The form fills in proposal_outcome and debrief_note rows in the schema we walked through Tuesday. Each note can target a specific KB block, with a suggested edit, queued for accept/decline/defer.
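The rows the form writes can be sketched as plain records. This is a minimal sketch, not the actual schema: the table names proposal_outcome and debrief_note are from the schema post, but every field name beyond status and stated_reason is illustrative.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shapes for the rows the debrief form writes.
# Only proposal_outcome, debrief_note, status, and stated_reason
# come from the schema post; the rest is assumed for illustration.

@dataclass
class ProposalOutcome:
    proposal_id: str
    status: str              # e.g. "won" | "lost" | "pending"
    stated_reason: str = ""  # buyer's stated reason, if captured

@dataclass
class DebriefNote:
    proposal_id: str
    text: str
    target_kb_block: Optional[str] = None  # KB block this note targets
    suggested_edit: Optional[str] = None   # queued for review
    review_state: str = "pending"          # "accept" | "decline" | "defer" | "pending"

note = DebriefNote(
    proposal_id="P-1042",
    text="Buyer cited outdated security language.",
    target_kb_block="kb-security-overview",
    suggested_edit="Update the SOC 2 section to reference the current report.",
)
print(note.review_state)  # stays "pending" until a reviewer acts on it
```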

Theme clustering. Nightly clustering over proposal_theme assertions, surfacing repeat themes with associated win rate and buyer-diversity score. The mechanism is in Thursday’s post.
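The per-cluster rollup looks roughly like this. The real job clusters similar phrasings (see Thursday's post for the mechanism); grouping on normalized exact text here is just to show the win-rate and buyer-diversity math, and the sample rows are invented.

```python
from collections import defaultdict

# Invented sample rows standing in for proposal_theme assertions:
# (theme text, buyer, outcome).
assertions = [
    ("fast onboarding", "acme", "won"),
    ("Fast onboarding", "globex", "won"),
    ("fast onboarding", "initech", "lost"),
    ("24/7 support", "acme", "lost"),
]

# Toy "clustering": group on normalized text. The production job
# clusters similar-but-not-identical phrasings.
clusters = defaultdict(list)
for theme, buyer, outcome in assertions:
    clusters[theme.strip().lower()].append((buyer, outcome))

for theme, members in clusters.items():
    wins = sum(1 for _, o in members if o == "won")
    win_rate = wins / len(members)
    # Distinct buyers over members: 1.0 means every mention is a
    # different buyer, lower means one buyer dominates the cluster.
    buyer_diversity = len({b for b, _ in members}) / len(members)
    print(theme, len(members), round(win_rate, 2), round(buyer_diversity, 2))
```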

KB write-back. Approved debrief edits become kb_block revisions with provenance back to the originating bid and debrief. The next bid drafting against that block sees the updated content.
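The provenance chain is the important part: each revision points back at the bid and debrief that produced it. A minimal sketch, assuming an append-only revision history per block (the field names here are hypothetical, not the kb_block schema):

```python
from dataclasses import dataclass

# Hypothetical revision record; only kb_block as a concept comes from
# the post, the field names are assumed.
@dataclass(frozen=True)
class KbRevision:
    block_id: str
    content: str
    source_bid: str      # originating bid
    source_debrief: str  # originating debrief note

history: dict[str, list[KbRevision]] = {}

def apply_edit(block_id: str, new_content: str, bid_id: str, debrief_id: str) -> KbRevision:
    """Append a new revision; the latest revision is what drafting sees."""
    rev = KbRevision(block_id, new_content, bid_id, debrief_id)
    history.setdefault(block_id, []).append(rev)
    return rev

apply_edit("kb-security-overview", "Updated SOC 2 language.", "P-1042", "D-88")
latest = history["kb-security-overview"][-1]
```

The next bid drafting against `kb-security-overview` reads `history[block_id][-1]`, so the updated content flows forward automatically while the trail back to the debrief stays intact.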

Outcome ingestion. Three sources today: manual entry, buyer email parser (opt-in), and GAO decision import for federal bids that went to protest. Public-record import for state procurement portals lands in March.

Diff view. For repeat buyers, the dashboard shows what changed since the last proposal to that buyer — themes added, themes retired, blocks updated. Compresses the debrief read.
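The diff itself is simple set arithmetic over the two most recent proposals to the same buyer. A sketch under assumed storage: each proposal carries its theme set and the KB-block revision numbers it drew from (both invented here).

```python
# Previous and current proposals to the same buyer (invented data).
prev = {"themes": {"fast onboarding", "24/7 support"},
        "blocks": {"kb-security-overview": 3, "kb-pricing": 1}}
curr = {"themes": {"fast onboarding", "dedicated CSM"},
        "blocks": {"kb-security-overview": 5, "kb-pricing": 1}}

added = curr["themes"] - prev["themes"]      # themes introduced this time
retired = prev["themes"] - curr["themes"]    # themes dropped since last time
updated = [b for b in curr["blocks"]         # blocks whose revision changed
           if b in prev["blocks"] and curr["blocks"][b] != prev["blocks"][b]]
```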

What’s not in it

No competitor field. We covered why in the schema post: the data was too noisy to be useful in v1. Coming back to it when we can connect it to a real signal.

No automated “lesson learned” generation. The debrief notes are written by humans. The dashboard surfaces patterns; it doesn’t paraphrase them into an executive summary nobody trusts.

Where it works today

For teams with at least 30 closed proposals in the system. Below 30, the clustering doesn't have enough density to produce meaningful themes. Below 60, the win-rate column has wide error bars on most clusters. The dashboard flags both thresholds, with member counts visible on every cluster.
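To see why small clusters get flagged, a Wilson 95% interval on a cluster's win rate makes the point (this is a standard interval for a binomial proportion, not necessarily the dashboard's exact statistic): at the same observed win rate, the interval is roughly twice as wide for 10 members as for 60.

```python
import math

def wilson_interval(wins: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion at ~95% confidence."""
    p = wins / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Same 60% observed win rate, different cluster sizes.
lo10, hi10 = wilson_interval(6, 10)
lo60, hi60 = wilson_interval(36, 60)
print(round(hi10 - lo10, 2), round(hi60 - lo60, 2))  # interval widths
```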

Docs

The full how-to is at docs.bidforge.com/win-loss. The schema and clustering posts above are the engineering side of the same feature.

How to turn it on

For existing customers, the dashboard appears under Intelligence > Win-Loss. Closed proposals already in the system are backfilled into proposal_outcome rows with status pending and an empty stated_reason; the team fills these in via the debrief form, or imports them from a CSV via the Settings page. The clustering job runs nightly across all closed-with-outcome proposals.
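The CSV import can be sketched like this. The column names are an assumption for illustration; check docs.bidforge.com/win-loss for the actual format. Rows that arrive without a status stay pending for the debrief form.

```python
import csv
import io

# Assumed CSV columns: proposal_id, status, stated_reason.
raw = """proposal_id,status,stated_reason
P-1001,won,Price and timeline
P-1002,lost,
"""

rows = list(csv.DictReader(io.StringIO(raw)))
outcomes = [
    {"proposal_id": r["proposal_id"],
     "status": r["status"] or "pending",   # missing status stays pending
     "stated_reason": r["stated_reason"]}  # may be empty; debrief fills it
    for r in rows
]
```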

The buyer-email parser is opt-in per company. It scans a designated mailbox (typically awards@yourcompany.com or similar) for award notifications and pulls the stated reason into the dashboard for the proposal manager to confirm. We do not auto-apply parser output; every parsed outcome has a “confirm” step.

Pricing

The dashboard is included in the tier that already covers win-loss capture and the provenance graph; there is no additional line item for clustering or evidence linking. The reasoning: the dashboard is the part of win-loss intelligence that finally connects the captured data to KB edits. Without it, the capture work is half-finished, and we don't want pricing to encourage half-finished use of the product.

What ships next

The week of February 8th brings the evidence-linking work: every debrief note automatically becomes a KB-block edit suggestion. That's the feature that closes the loop the schema was designed for. The week after, per-block win/loss correlation lands, exposing which KB blocks correlate with winning or losing bids and surfacing them for review.

Sources

  1. The win-loss database schema, explained
  2. Clustering win themes across 200 past bids
  3. Shipped — win-loss pair capture