Proposal post-mortems: the discipline, the template, the follow-through
The canonical long-read on proposal post-mortems. What a post-mortem actually accomplishes, the template that makes the discipline sustainable, how to get a debrief from the buyer, and the three follow-through patterns that work.
Most proposal teams either skip post-mortems entirely or run them as a 30-minute Zoom that produces no follow-through. Skipped, they leave the compounding loop open. Run as a vent, they feel productive and change nothing. The discipline — the part that makes the next bid materially easier than the last one — lives in three places: what the post-mortem actually accomplishes, the template that structures it, and the follow-through that drags its conclusions into the next bid’s capture plan.
This post is the long version of all three. It’s the piece I wish I’d been handed my first year running a proposal function. It includes a template I use, copy-pasteable. It includes the sample email I send to buyers asking for a debrief. It includes the follow-through patterns I’ve watched work and the ones I’ve watched fail. At the end, I’ll name what software has to do to keep this sustainable — which is not the same as “what software does today.”
What the post-mortem actually accomplishes
A post-mortem is not a group-therapy session. It is the production of three specific outputs. If the post-mortem produces all three, it’s a post-mortem. If it produces two, it’s an incomplete review. If it produces none, it’s a wake.
Output 1 — KB updates. Every bid draws on a knowledge base. Past-performance entries. Product blocks. Policy statements. Win themes. The post-mortem’s job is to identify, for this specific bid, which blocks carried weight and which need work. A block that showed up in five sections and reviewers flagged as “strong” is promoted — tagged as load-bearing, not to be rewritten without cause. A block that got rewritten heavily in three places is marked stale and added to the rewrite queue with an owner. A block that would have been useful but didn’t exist gets a gap ticket — an entry for the KB-gap queue with a named owner and a due date.
Without Output 1, the library doesn’t improve. It accretes. Eight months later, the library is bloated with blocks that nobody has evaluated in context, and retrieval gives the drafter a mix of good content and rot. Shelf’s research on outdated knowledge bases names this decay pattern: the library’s content drifts further from what the organization can actually deliver, and the trust the library once enjoyed erodes.
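To make Output 1 concrete, here is a minimal sketch of the record a single block verdict produces. The enum values, the dataclass, and the field names are all my own illustration, not any particular tool's schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class BlockStatus(Enum):
    LOAD_BEARING = "load-bearing"  # carried weight; not to be rewritten without cause
    STALE = "stale"                # rewritten heavily in-bid; goes to the rewrite queue
    GAP = "gap"                    # would have helped but didn't exist; open a gap ticket

@dataclass
class BlockVerdict:
    block_id: str                # KB identifier, or the proposed id for a gap
    status: BlockStatus
    evidence: str                # where the signal came from: reviewer flag, rewrite count
    owner: Optional[str] = None  # required for STALE and GAP
    due: Optional[date] = None   # required for STALE and GAP

# A stale block from this bid, queued for rework with an owner and a date:
verdict = BlockVerdict(
    block_id="past-performance/acme-2023",
    status=BlockStatus.STALE,
    evidence="rewritten heavily in three sections during drafting",
    owner="j.doe",
    due=date(2025, 7, 1),
)
```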
Output 2 — Win theme retirement and promotion. Every proposal commits to three to five win themes in the capture plan. The post-mortem judges each one. Verdicts come in three flavors. Promote — the theme landed in the response, showed up in the debrief (where a debrief occurred), and has evidence of moving the score. Use it again on similar pursuits. Retire — the theme didn’t land, the evaluator didn’t mention it, the draft struggled to carry it. Stop using it. Rewrite — the kernel of the theme is right but the phrasing isn’t landing. Rework the phrasing; keep the underlying claim.
Without Output 2, the theme library becomes a junk drawer. Every new pursuit pulls from it without any signal about which themes have evidence behind them and which don’t. PropLibrary’s swap test is one filter; post-mortem evidence is the other. Both filters are needed. The swap test asks “could a competitor say this?” The post-mortem asks “did this theme actually work when we said it?”
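The verdicts themselves are a closed set of three, which is what makes them trackable across bids later (the theme-effectiveness sketch further down builds on this). Again a minimal sketch, with invented names:

```python
from enum import Enum

class ThemeVerdict(Enum):
    PROMOTE = "promote"  # landed, has evidence; reuse on similar pursuits
    RETIRE = "retire"    # didn't land; stop using it
    REWRITE = "rewrite"  # right claim, wrong phrasing; rework the words

# Each verdict travels with one sentence of evidence, as the template below requires:
verdicts = {
    "fastest-onboarding-in-category": (ThemeVerdict.PROMOTE,
        "evaluator cited the onboarding timeline in the debrief"),
    "lowest-total-cost-of-ownership": (ThemeVerdict.RETIRE,
        "never mentioned in the debrief; draft struggled to carry it"),
}
```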
Output 3 — Capture-process learnings. The most uncomfortable output and the most underrated one. For each bid, the post-mortem asks: what did we learn about this buyer, this category, this competitive set, our own internal workflow? Every bid produces intelligence the next bid’s capture lead can use. Who advocated inside the buyer. What their evaluation panel actually weighted vs. what the RFP said they’d weight. Which of our internal SMEs were available on the timeline vs. which ones weren’t. How long the procurement cycle actually ran. What the incumbent did that we hadn’t anticipated.
Without Output 3, capture starts from zero on every bid. Every capture lead rediscovers every buyer. Every proposal manager recalibrates from nothing.
If a post-mortem doesn't produce all three, it hasn't done its job. Naming the three outputs is the first move. The rest of this post is about how to actually produce them.
The template, fully shown
This is the template I use. It’s copy-pasteable. Adapt it to your context — add sections your team needs, drop sections that don’t apply — but keep the core.
Section 1 — Bid summary
A factual paragraph. Four sentences maximum.
- Opportunity: one sentence naming the buyer, the scope, and the contract shape (services, license, multi-year, etc.).
- Proposed solution: one sentence naming what the team proposed at the highest level.
- Commercials: one sentence with price, term, and any unusual terms.
- Disposition: one sentence with the outcome (won, lost, withdrawn, no-decision) and the date.
Nothing interpretive. This section is the canonical record — six months from now, someone reading this post-mortem should know immediately what bid it’s about.
Section 2 — What we proposed
Three sub-parts.
- Win themes committed: the three to five themes the capture plan named.
- Response structure: how the technical approach was structured (rubric-mirroring, our own taxonomy, hybrid). One paragraph.
- Key differentiators claimed: the specific discriminators (not features) the response led with. For the difference between discriminators and features, see discriminator-vs-feature.
Section 3 — What we learned
Four sub-parts. This is the section that takes the longest to write and produces the most value.
- Win themes — per-theme verdict. For each theme: promote, retire, or rewrite. One sentence of evidence per verdict (the debrief quote, the reviewer’s comment, the evaluator’s line in the scoring summary).
- Content blocks — what carried weight, what didn’t. A short list of KB blocks that appeared in the response, tagged: load-bearing and good, load-bearing but stale, missing entirely (gap to fill). Three to ten blocks is typical; more than twenty is a sign the reviewer was cataloguing instead of judging.
- Capture intelligence. Three to five sentences of specific things we learned about this buyer or this competitive set that we did not know at kickoff.
- Internal workflow. One paragraph on the bid’s internal rhythm: what worked, what broke, what we’d do differently on a structurally similar bid.
Section 4 — What changes in the KB
A bulleted list. Per item: block identifier, proposed change, owner, due date.
This section is the concrete output. If the post-mortem ends without this section populated, Output 1 hasn’t been produced.
Section 5 — What changes in capture
A bulleted list of named changes to the capture playbook, if any. Not every bid produces capture changes; some produce zero. When a bid does produce them, they get named here with an owner and a date.
Section 6 — Three follow-up tickets
Three tickets, with owners and dates. Not five. Not twelve. Three.
The ticket count is the single most-argued-over part of this template in my experience. Teams want more. Teams feel that capping at three under-represents the lessons learned. The reason to cap: post-mortems with twelve action items produce zero action items. Post-mortems with three action items produce, in my experience, about two of them actually shipping. Two shipped beats twelve filed-and-forgotten every time.
The three tickets are selected by the proposal manager with the team’s input. They are the three changes that, if shipped before the next comparable bid, would most move the team’s chances. Not the three most interesting observations. Not the three easiest changes. The three most-moving.
Section 7 — Debrief notes (if any)
If the team obtained a debrief from the buyer, its contents are summarized here. Raw notes go in an appendix; this section is the two-paragraph synthesis that the next capture lead will read.
If the team didn’t obtain a debrief, that gets noted here with the reason. “We asked and were declined.” “We didn’t ask.” “The buyer doesn’t do debriefs.” Each is a different lesson for the next time.
That’s the template. Seven sections. A typical post-mortem document runs 800 to 1,500 words. A comprehensive one on a high-value bid might run 3,000. Ten thousand words is a signal that the reviewer stopped filtering; trim.
The debrief call, which is the hardest part
A post-mortem without a debrief is guessing. A post-mortem with a debrief has a source. The debrief is the buyer’s voice in the record, and it’s the single highest-leverage input the post-mortem can include.
Most teams don’t get debriefs because most teams don’t ask for them. Of the teams that do ask, most don’t ask skillfully. The ones that ask skillfully often get them.
Requesting a debrief. Here is the email language I use. Adapt it to your tone; the shape matters more than the exact words.
Subject: request for feedback on [Project Name] proposal
Hi [Name],
I understand the award has gone to [vendor or “another vendor”]. Thank you for the time your team invested in evaluating our response — we know the volume of work involved on your side, and we appreciated the chance to participate.
We would find it genuinely useful to understand, at whatever level of detail your procurement policies allow, where our response was strong and where it fell short of your needs. Specific areas we’re curious about:
- How our technical approach compared to the approach that won, particularly on [the scoring area we most invested in].
- Whether our past-performance citations were credibly mapped to what you were evaluating.
- Whether there were aspects of the evaluation we missed addressing entirely.
A 30-minute call at your convenience in the next two weeks, or a short written note if a call isn’t feasible, would be welcome.
Thank you again, [Name]
What this email does: it acknowledges the loss without conceding quality, it names specific areas of curiosity (so the evaluator isn’t being asked to summarize the entire evaluation), and it offers written feedback as an alternative so the answer isn’t a binary yes-or-no to a meeting. The phrase “whatever level of detail your procurement policies allow” gives the buyer a clean exit if they have restrictions; it also signals you understand the landscape.
What to ask on the call, if you get one. Four questions, in order:
- “What factors weighed most heavily in your selection?” (Lets the evaluator lead.)
- “Where did our response fall short of what you needed?” (The honest answer to this is the most valuable minute of the call.)
- “Were there aspects of our response that you found particularly effective?” (Tells you which themes landed, for Output 2.)
- “Looking forward, is there anything about our offering you’d want to see before a future procurement?” (Opens the next pursuit.)
Let the evaluator talk. Take notes. Do not argue a single point during the call. Nothing you say during the debrief changes the award decision, and every defensive response you offer shortens the next debrief you try to schedule with this buyer.
Why losing-bid debriefs are gold. The losing debrief has two kinds of content a winning debrief doesn’t have. First, it tells you what the winner did better — which is intelligence about your competitors you cannot otherwise acquire. Second, it tells you what in your own response failed, which is intelligence about your own blind spots that your internal reviewers almost certainly couldn’t surface themselves.
The winning debrief is also valuable — it tells you which themes landed, which citations the panel found convincing, which discriminators moved the score — but the losing debrief is usually the more educational of the two.
The follow-through problem
This is where most teams break. The template gets filled. The debrief gets conducted. The record gets written. And then nothing changes in the next bid because the post-mortem lives in a folder nobody opens.
Leulu’s writing on this names the pattern specifically: “The debrief ends, the document is published, people move on. Nobody asks at sprint planning: ‘what happened to the actions from last week’s incident?’” The discipline that closes the loop has to be structural. Relying on memory, or on the proposal manager’s personal follow-up, fails in the second month and stays failed.
Three patterns I’ve watched work.
Pattern 1 — KB write-back as a closing checklist item. The post-mortem is not closed until Section 4 (KB changes) has produced a merged pull request or a completed task in the KB. The proposal manager cannot mark the bid’s record “closed” in the system until every proposed KB change has either been applied, rejected with a written rationale, or deferred with an explicit owner and deadline.
This pattern works because closure is a state the proposal manager cares about (pending bids on the dashboard are a distraction). If closure requires write-back, write-back happens. If closure is decorative, write-back doesn’t.
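Here is Pattern 1 sketched as an enforced rule rather than a convention. The statuses and the function are assumptions of mine; the point is only that closing refuses until every Section 4 item is resolved one way or another.

```python
RESOLVED = {"applied", "rejected"}  # "rejected" requires a written rationale

def can_close(kb_changes: list[dict]) -> tuple[bool, list[str]]:
    """A bid record may close only when every Section 4 item is applied,
    rejected with a rationale, or deferred with an owner and a deadline."""
    blockers = []
    for change in kb_changes:
        status = change["status"]
        if status in RESOLVED and (status != "rejected" or change.get("rationale")):
            continue
        if status == "deferred" and change.get("owner") and change.get("due"):
            continue
        blockers.append(change["block_id"])
    return (not blockers, blockers)

ok, blockers = can_close([
    {"block_id": "past-performance/acme-2023", "status": "applied"},
    {"block_id": "security/faq-block", "status": "deferred", "owner": "j.doe"},
])
# ok is False: the deferred item has an owner but no deadline, so closure is blocked.
```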
Pattern 2 — Weekly review ritual that pulls open follow-ups. Every Monday morning, the proposal team’s stand-up includes a five-minute item: “open post-mortem tickets over 14 days old.” The list is pulled, not compiled from memory. Each overdue ticket gets a decision: this week, next week, or drop. Dropping is allowed but must be named — a ticket marked “drop” with no rationale gets re-opened at the next review.
This pattern works because the review is brief and because dropping is legitimate. Teams that refuse to allow dropping end up with a ticket graveyard that the team learns to ignore; teams that allow clean drops without rationale end up not learning. Dropping with a named rationale is the sweet spot.
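Pattern 2 sketched the same way: a query plus a forced decision, with field names of my own. The 14-day threshold and the three decisions come straight from the ritual above.

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=14)

def overdue_tickets(tickets: list[dict], today: date) -> list[dict]:
    """Open post-mortem tickets more than 14 days old, for Monday stand-up."""
    return [t for t in tickets
            if t["status"] == "open" and today - t["opened"] > STALE_AFTER]

def decide(ticket: dict, decision: str, rationale: str = "") -> dict:
    """Each overdue ticket gets exactly one of three decisions. A drop
    without a rationale is invalid; the ticket reopens at the next review."""
    assert decision in {"this_week", "next_week", "drop"}
    if decision == "drop" and not rationale:
        raise ValueError("dropping is allowed, but only with a named rationale")
    return {**ticket, "decision": decision, "rationale": rationale}
```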
Pattern 3 — Ownership baked into the next quarter’s scoring. The three follow-up tickets have owners and dates. Those tickets are part of the owner’s quarterly deliverables. The owner’s performance conversation at the end of the quarter includes whether the tickets shipped. Not as the primary metric, but as a named input.
This pattern is the heaviest of the three. It requires manager buy-in and a performance framework that can carry it. When it works, it works hardest. When a manager doesn’t adopt it, it doesn’t work at all. I mention it because teams that are serious about compounding their proposal function usually end up here eventually.
The three patterns can be combined. The teams I’ve watched build the strongest post-mortem discipline do all three at once: closure requires write-back, weekly reviews pull overdue tickets, and owners are accountable at quarter-end for the three tickets they took.
What software has to do
Post-mortem discipline is mostly a human discipline. Software doesn’t make teams honest, doesn’t produce follow-through, and doesn’t substitute for the debrief call. What software can do is remove friction from the parts of the discipline that are frictional.
Win/loss pair capture. When a proposal hits a terminal status, the system captures the response-as-submitted, the compliance matrix, the capture plan, and the drafting logs into a single immutable record. This is what we shipped on Day 85, and it’s the prerequisite. Without it, the post-mortem is working from the proposal manager’s memory.
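A sketch of the shape such a record might take, with invented names. The essential property is that the artifacts are frozen together at terminal status rather than reassembled from memory later.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)  # frozen: the record cannot be edited after capture
class BidRecord:
    bid_id: str
    disposition: str            # "won" | "lost" | "withdrawn" | "no-decision"
    closed_at: datetime
    response_as_submitted: str  # pointer to the exact document that went out the door
    compliance_matrix: str      # the matrix as it stood at submission
    capture_plan: str           # the plan, committed win themes included
    drafting_logs: tuple = ()   # tuple rather than list, so the record stays immutable
```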
Theme effectiveness tracking. Every win theme has a record across every bid it was used on. Over time, a theme accumulates evidence: which bids it was used on, which of those bids won or lost, whether debriefs mentioned it. A theme that’s been used on eight bids, won three, and been mentioned in zero debriefs is a weak theme. A theme that’s been used on four bids, won three, and been mentioned specifically in two debriefs is a strong theme. The software can surface this at capture-planning time — when the capture lead is choosing themes for a new pursuit, weak and strong themes are visibly marked.
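The weak-versus-strong judgment in that paragraph is plain arithmetic over a theme’s history. A sketch with illustrative thresholds, not a shipped scoring rule:

```python
from dataclasses import dataclass

@dataclass
class ThemeHistory:
    theme: str
    bids_used: int
    bids_won: int
    debrief_mentions: int

def classify(h: ThemeHistory) -> str:
    """Illustrative thresholds for surfacing themes at capture-planning time."""
    if h.bids_used < 3:
        return "unproven"                      # not enough evidence either way
    win_rate = h.bids_won / h.bids_used
    if win_rate >= 0.5 and h.debrief_mentions >= 1:
        return "strong"                        # e.g. 3 of 4 won, mentioned twice
    if win_rate < 0.5 and h.debrief_mentions == 0:
        return "weak"                          # e.g. 3 of 8 won, never mentioned
    return "mixed"

assert classify(ThemeHistory("t1", 8, 3, 0)) == "weak"
assert classify(ThemeHistory("t2", 4, 3, 2)) == "strong"
```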
Block freshness tied to win/loss outcome. A content block that appeared in five recent won bids is load-bearing. A block that appeared in five recent lost bids might be signaling something; at minimum it deserves scrutiny. Software can update the freshness state of blocks based on downstream outcomes, surfacing blocks that should be reviewed even if a human hasn’t explicitly flagged them. See shipped-freshness-scores for the underlying mechanism.
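And the outcome-driven freshness adjustment, under the same caveat: the rule below illustrates the mechanism and is not the shipped one.

```python
def freshness_after_outcomes(block: dict, recent_outcomes: list[str]) -> str:
    """Adjust a block's freshness state from the dispositions of the recent
    bids it appeared in. Losses don't prove the block is bad, but a streak
    of them is enough signal to demand human review."""
    wins = recent_outcomes.count("won")
    losses = recent_outcomes.count("lost")
    if wins >= 5 and losses == 0:
        return "load-bearing"   # appeared in five recent wins
    if losses >= 5 and wins == 0:
        return "needs-review"   # appeared in five recent losses: scrutinize it
    return block.get("freshness", "unreviewed")  # no strong signal; keep current state
```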
What software cannot do. It cannot run the debrief call. It cannot decide which three tickets are the right three. It cannot make a proposal manager who is uninterested in post-mortems pretend to be interested. The discipline is the discipline; software reduces the paperwork but does not manufacture the will.
Closing
The compounding claim at the heart of how PursuitAgent positions itself — every RFP you win makes the next one easier — lives or dies in Stage 8 of the pipeline. Without a disciplined post-mortem, every bid starts from zero. With one, the library, the themes, and the capture intelligence accumulate in a way the next pursuit can actually draw on.
The discipline is concrete. Three outputs: KB updates, theme verdicts, capture learnings. A template of seven sections, copy-pasteable, that produces the three outputs. A debrief call, requested with a specific email shape, conducted with four questions. Three follow-through patterns that make the record dig its way into the next bid.
For the stage this piece sits inside, see the 8-stage RFP response pipeline — Stage 8 is where this lives. For the theme library the post-mortem feeds and draws from, see the win themes field guide. For the software underneath, see shipped: win/loss pair capture. The pieces connect; the discipline is what makes the connections load-bearing.
Sources
1. Shipley Proposal Guide (7th ed.) — After-action reviews
2. APMP Body of Knowledge — Proposal lifecycle
3. PropLibrary — Proposal win themes: the good, the bad, and six examples
4. Lohfeld Consulting — How to fix the proposal processes holding you back
5. Leulu & Co. — The proposal post-mortem: what your losses can teach you besides humility
See grounded retrieval in the product.
Start a trial workspace and watch PursuitAgent draft cited answers from the documents you provide.