Field notes

The 8-stage RFP response pipeline, explained

A canonical long-read on how a mature proposal shop actually moves an RFP from the hand-off email through submission and the post-mortem that feeds the next bid. Eight stages, what each one owns, and where each one fails.

Sarah Smith · 14 min read · RFP Mechanics

A proposal is a manufactured object. Someone fabricates it, end to end, in a known sequence of steps, to a quality bar, against a deadline. Treating it as anything other than a manufactured object is where most of the trouble starts.

This post is the long-form version of a conversation I have with every proposal lead who joins a team I’m advising. There are eight stages. They are in order. You can compress them, you can parallelize parts of them, but you cannot skip them and you cannot rearrange them without paying for it.

The stages are: Intake, Bid/No-Bid, Capture, Compliance, Draft, Color-Team Review, Submit, and Post-Mortem. I’ll walk through each one — what it owns, who owns it, what goes wrong, and what good looks like.

Stage 1 — Intake

What it is. The RFP arrives. Someone receives it. The “it” is usually a PDF, sometimes a portal link, sometimes a Word document that was last saved by somebody in 2019 who meant to fix the formatting. Intake is the act of turning the inbound artifact into a proposal record: a dated, versioned, categorized thing that the proposal function can respond to.

Who owns it. An ops or capture lead. If there isn’t one, it defaults to whoever got the email — usually an account executive or sales engineer — and that is where everything starts to go wrong.

What goes wrong. Four things, in order of frequency:

  1. The RFP sits in an inbox for three days because nobody has been nominated to receive it.
  2. The RFP is extracted into a spreadsheet by hand — 250 line items, split between four tabs — and the spreadsheet becomes the source of truth instead of the PDF.
  3. Addenda, modifications, and Q&A responses land after intake and nobody updates the record. The team responds to v1 while the buyer has posted v3.
  4. Attachments — the scoring rubric, the compliance matrix template, the technical appendix — are missed. Three weeks later someone discovers that the buyer’s evaluation framework was an attached Word doc nobody opened.

What good looks like. A single person receives inbound RFPs into a single tool. The tool timestamps the intake, fingerprints the document, and creates a record that addenda can be attached to without replacing the original. Every attachment is extracted and indexed. An automated check surfaces the compliance language (“offeror shall submit,” “offeror must provide”) so the response can be structured against it from hour one.
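As an illustration, here is a minimal sketch of what that automated check could look like, assuming a simple regex pass over the extracted RFP text. The pattern list, record shape, and names are my assumptions, not a description of any particular tool.

```python
import re
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Modal constructions that usually signal a binding requirement.
# The list is an assumption; tune it per buyer and contract type.
REQUIREMENT_PATTERN = re.compile(
    r"\b(offeror|contractor|vendor)s?\s+(shall|must|will)\s+[^.]{5,300}\.",
    re.IGNORECASE,
)

@dataclass
class IntakeRecord:
    """A dated, versioned proposal record created at intake (illustrative shape)."""
    rfp_id: str
    received_at: datetime
    source_text: str
    requirements: list[str] = field(default_factory=list)

def surface_requirements(record: IntakeRecord) -> IntakeRecord:
    """Pull candidate compliance sentences out of the raw RFP text at hour one."""
    record.requirements = [
        match.group(0).strip()
        for match in REQUIREMENT_PATTERN.finditer(record.source_text)
    ]
    return record

record = surface_requirements(IntakeRecord(
    rfp_id="RFP-2025-017",
    received_at=datetime.now(timezone.utc),
    source_text=(
        "The Offeror shall submit a staffing plan. "
        "The Offeror must provide three client references."
    ),
))
print(record.requirements)  # two candidate requirement sentences
```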

The VisibleThread team has written repeatedly that rushing into writing without fully understanding the requirements is the leading cause of proposal failure. Intake is where that understanding is either built in or missed. Cheap stage, huge downstream leverage.

Stage 2 — Bid/No-Bid

What it is. The deliberate decision of whether to respond. Not “can we respond” — of course we can respond — but “should we.” A bid/no-bid is made against a small set of variables: strategic fit, probability of win, cost to produce the response, and opportunity cost of the people who would be pulled onto the response.

Who owns it. A senior commercial owner — VP of Sales, Head of Proposals, CFO, or the founder. Never the person who received the RFP.

What goes wrong. Teams say yes to everything. The default posture in most B2B vendors is that declining to respond is “leaving money on the table.” Teams that say yes to everything run their proposal engines at 100% utilization responding to bids they can’t win, while the bids they could win don’t get the attention they deserve. A senior sales engineer at a healthcare-IT vendor described a quarter to me in which they wrote 22 responses. They won one. The 21 losses weren’t low-probability gambles — they were bids nobody sat down and made a real decision about.

The second failure mode: bid/no-bid as a ritual rather than a decision. A 15-minute Zoom where the right answer is already baked in, the proposal manager nods along, and the team writes the bid anyway. If the decision isn’t made against a written scoring framework, it isn’t a decision.

What good looks like. A scoring model with five variables:

  • Strategic fit — does this customer move the business forward, or is it a cul-de-sac?
  • Probability of win — based on incumbency, relationship, fit, past RFPs from this buyer.
  • Cost to produce — real hours, including SME time.
  • Opportunity cost — what else would the team do this week?
  • Deal quality — ACV, gross margin, payment terms, reference value.

Score each dimension 1–5. Set a floor. A bid that doesn’t clear the floor is a no-bid with a written rationale. Teams that adopt this pattern typically halve their response volume in a quarter and double their win rate on the bids they submit. I don’t have permission to publish the specific numbers from the teams I’ve seen do this, but the pattern is consistent.
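A minimal sketch of that model, assuming equal weights and an illustrative floor of 18 out of 25; the field names and threshold are mine, not a standard.

```python
from dataclasses import dataclass

# Each dimension is scored 1-5 by the senior commercial owner at the bid/no-bid review.
# Cost and opportunity cost are scored so that higher is better (5 = cheap, low-cost week).
@dataclass
class BidScore:
    strategic_fit: int
    win_probability: int
    cost_to_produce: int
    opportunity_cost: int
    deal_quality: int

    FLOOR = 18  # illustrative threshold out of a maximum of 25

    def total(self) -> int:
        return (self.strategic_fit + self.win_probability + self.cost_to_produce
                + self.opportunity_cost + self.deal_quality)

    def decision(self) -> str:
        if self.total() >= self.FLOOR:
            return "BID"
        return "NO-BID (write the rationale down)"

print(BidScore(4, 3, 4, 3, 5).decision())  # total 19, clears the floor: BID
print(BidScore(2, 2, 4, 3, 3).decision())  # total 14, below the floor: NO-BID
```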

Stage 3 — Capture

What it is. The period — typically two to eight weeks — between “we are responding to this RFP” and “writing begins.” Capture is the work of understanding the buyer: their explicit evaluation criteria, their unstated priorities, their procurement-side constraints, their known vendor shortlist, and the politics of who on the buyer side is advocating for what.

Who owns it. A capture lead, distinct from the proposal manager. Capture is sales work. Proposal management is production work. A single person can do both in a small shop, but the mental modes are different.

What goes wrong. Most teams skip capture. They move straight from bid decision to writing. The result is a response that addresses the letter of the RFP and none of the unwritten priorities. Every reader on the buyer side reads that response as generic, because nothing in the text reflects their mental model of what matters.

The second failure mode: capture work happens, but it’s informal. It lives in one person’s head or in a chain of Slack messages. When that person goes on vacation or leaves the company, capture intelligence evaporates. The next RFP from the same buyer starts from zero.

What good looks like. A written capture plan that enumerates:

  • The buyer’s strategic initiative this RFP supports.
  • The evaluation panel — named humans, known roles, known preferences where discoverable.
  • The incumbent (if any), their likely renewal posture, their known weaknesses.
  • The three to five win themes that will run through the response.
  • The known disqualifiers — clauses we can’t accept, certifications we don’t have, prior history we need to address.

Capture plans live as documents, not as conversation. They are re-read before every section of the response is drafted.

Stage 4 — Compliance

What it is. Compliance is the act of making a map from the RFP’s requirements to the response’s structure. Every “shall,” “must,” “will provide,” and “describe” in the RFP becomes a line in a compliance matrix. Each line gets a pointer to the section of your response that answers it.

Who owns it. The proposal manager, working with an analyst or, increasingly, with software that extracts requirements at intake.

What goes wrong. Compliance matrices are built late. They’re built by copy-pasting the RFP text into Excel, which takes a day. By the time the matrix is ready, the first drafters are already three sections deep into a response structured on the team’s internal preferences instead of the buyer’s explicit categories. The matrix becomes a post-hoc audit tool instead of a forward-looking scaffold.

The second failure: the matrix is built once and not updated as addenda land. The buyer posts a modification on day 12 that adds three new requirements. Unless the matrix catches the diff, those three requirements get answered by accident or not at all.

What good looks like. A living compliance matrix with a row per requirement, a column per response section, and an automated diff against addenda. The response is structured to match the matrix, not the team’s writing preferences. Every response section has a visible badge showing the requirements it addresses. Reviewers can filter by “unaddressed” in the review tool. On submission, every row in the matrix has a non-null response pointer or an explicit “acknowledged, no response required” note.
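A minimal sketch of the addendum diff against such a matrix; the requirement identifiers, field names, and matching rule are assumptions for illustration, not how any specific tool keys its rows.

```python
# Compliance matrix keyed by requirement identifier; each row points at the
# response section that answers it (None = unaddressed).
matrix = {
    "L.3.1": {"text": "Offeror shall submit a staffing plan.", "section": "3.2 Staffing"},
    "L.3.2": {"text": "Offeror must provide three references.", "section": "5.1 Past Performance"},
}

# Requirements extracted from a day-12 modification, using the same extraction step as intake.
addendum = {
    "L.3.2": "Offeror must provide five references.",         # changed
    "L.3.3": "Offeror shall describe its transition plan.",   # new
}

def apply_addendum(matrix: dict, addendum: dict) -> None:
    """Flag new and changed requirements so the matrix stays a forward-looking scaffold."""
    for req_id, text in addendum.items():
        if req_id not in matrix:
            matrix[req_id] = {"text": text, "section": None}
            print(f"NEW      {req_id}: {text}")
        elif matrix[req_id]["text"] != text:
            matrix[req_id]["text"] = text
            print(f"CHANGED  {req_id}: {text}")

apply_addendum(matrix, addendum)

# Submission gate: every row needs a response pointer or an explicit acknowledgement.
unaddressed = [req_id for req_id, row in matrix.items() if row["section"] is None]
print("Unaddressed requirements:", unaddressed)   # ['L.3.3']
```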

Stage 5 — Draft

What it is. The part people think is the whole thing. Actual writing. First-pass answers to the sectioned questions, each grounded in a source the responder trusts.

Who owns it. Writers — SMEs, proposal writers, presales engineers. In mature shops, a proposal writer drafts from SME inputs rather than asking SMEs to write. In immature shops, SMEs write directly, and the writing reflects the fact that they are engineers or lawyers or finance leads being asked to be communicators.

What goes wrong. Three failures, in order of severity.

  1. SME bottleneck. Qorus reported that 48% of proposal teams have named SME collaboration as their top challenge for five consecutive years. The best engineers are also the busiest. Proposal work competes with billable work and loses. SMEs respond late, or not at all, or in fragments that require an editor to interpret. The proposal manager spends more time chasing SME responses than building strategy — Lohfeld Consulting named this the single biggest fixable problem in a proposal shop.
  2. Generic draft. The draft reads like it could be about any company responding to any buyer. Win themes from the capture plan are mentioned once in an executive summary and then abandoned. The response is technically correct and commercially invisible.
  3. Drafts that don’t cite. When a response makes a factual claim — about our own product, our own past performance, our own compliance posture — the claim isn’t tied to a source the reviewer can check. This is the failure mode that grounded AI is meant to remove, and often the failure mode that grounded AI introduces a new version of.

What good looks like. Writers draft from a knowledge base of approved, citable content blocks. Each block is versioned and has a clear owner. SMEs are asked to review and approve a draft, not to write from scratch. Win themes are explicit threads through every major section, not slogans in a cover letter. Every substantive claim is traceable to a source document that a compliance or finance reviewer can verify in one click.
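A minimal sketch of what one such content block could look like; the field names and the example claim are illustrative, not a schema from any particular knowledge base.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ContentBlock:
    """An approved, citable block a writer can drop into a draft."""
    block_id: str
    version: int
    owner: str          # the SME accountable for keeping this block true
    approved_on: date
    text: str
    source_doc: str     # the document a reviewer opens to verify the claim

block = ContentBlock(
    block_id="security-soc2",
    version=4,
    owner="ciso@example.com",
    approved_on=date(2025, 1, 14),
    text="The platform has held SOC 2 Type II certification since 2021.",
    source_doc="evidence/soc2-type2-report-2024.pdf",
)

# A draft that cites block_id and version is traceable: a reviewer follows
# source_doc instead of taking the sentence on faith.
print(f"{block.text} [source: {block.source_doc}, v{block.version}]")
```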

Stage 6 — Color-Team Review

What it is. Staged reviews at defined intervals. The Shipley tradition uses color codes: pink team (structural review, ~30% drafted), red team (content review, ~80% drafted), gold team (final review for win themes and compliance, ~95% drafted), and white team (post-submission retrospective, sometimes merged with the post-mortem).

Who owns it. A proposal manager, with named reviewers for each stage. Reviewers are not the drafters. A drafter reviewing their own work is not a review — it is an edit.

What goes wrong. Teams treat color reviews as scheduling targets, not as substantive discipline. The pink team happens, but the review is an email with four comments. The red team is cancelled because the draft isn’t far enough along. The gold team runs on the day of submission, which means every change from the gold team is an emergency change.

The second failure: reviews that surface concerns that nobody acts on. A red-team reviewer says “the technical approach doesn’t differentiate from the incumbent” and the team notes it in a tracker and ships anyway because the deadline is close. The review was a safety valve the team didn’t open.

What good looks like. A review calendar with dates and named reviewers, set at kickoff. Pink team has a structured rubric: does the response structure match the compliance matrix? Red team has a structured rubric: do the win themes appear in every major section with evidence? Gold team has a structured rubric: does any sentence in the response lack a source a reviewer can verify? Reviews produce tracked action items with owners and deadlines before the submission day, and the submission only goes out when every item is closed or deliberately deferred with a written rationale.

Stage 7 — Submit

What it is. Sending the finished response. The mechanical stage. The one everyone assumes is trivial and the one that has ruined more proposals than any other.

Who owns it. The proposal manager or an ops lead. Not the writer.

What goes wrong. Portals that expect a specific file-naming convention reject uploads silently. Buyers specify “submit as a single PDF” and the team submits a zip. The response is submitted 30 minutes before the deadline and the portal goes down. The submission goes out without the required signed cover letter. These sound trivial until one of them disqualifies a $5 million bid.

What good looks like. A submission checklist — specific to this buyer’s portal and format requirements — built during intake and finalized during the gold-team review. A dry-run submission in a sandbox account. A submission day that starts eight hours before the deadline, not 30 minutes. A submission log with timestamps, receipts, and the name of the person who clicked the final submit.

Stage 8 — Post-Mortem

What it is. After the buyer’s decision, a deliberate retrospective. What did we propose? What were we graded on? What did we get right? What did we get wrong? What do we know now that we didn’t know at kickoff? What win themes worked and which ones didn’t?

Who owns it. The proposal manager, with participation from the capture lead, primary writers, and — if the team can get it — someone from the buyer’s evaluation panel. Debrief calls with buyers who selected you (and buyers who didn’t) are the highest-leverage intelligence a proposal function has access to.

What goes wrong. Post-mortems don’t happen. The team wins — and moves to the next bid. The team loses — and moves to the next bid. Either way, the structured learning that would make the next proposal better doesn’t get captured. The same win themes get reused without evidence of whether they worked. The same response structure gets reused without reference to the scoring rubric that would have rewarded a different structure. Every RFP starts from zero.

The second failure: post-mortems that happen but don’t update anything. The team holds a call, takes notes, and the notes live in a shared drive. The next proposal is written by someone who didn’t attend the call, didn’t read the notes, and doesn’t know the team learned anything.

What good looks like. A post-mortem that writes its output back into the knowledge base. Win themes that worked get promoted and reused. Win themes that didn’t get retired. Content blocks get updated based on what reviewers flagged. The next bid starts with the last bid’s intelligence baked in.

This is what I mean when I say a proposal function can compound. Without a post-mortem, every RFP starts at zero. With a disciplined post-mortem that updates the KB, every RFP starts from the accumulated intelligence of every bid that came before it. The compounding is the entire edge.
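A minimal sketch of that write-back, assuming win themes are tracked with a simple usage record in the knowledge base; the structures, thresholds, and theme names are assumptions for illustration.

```python
# Knowledge base of win themes, each carrying a running track record.
win_themes = {
    "fastest-implementation": {"status": "active", "used": 6, "wins": 4},
    "lowest-tco":             {"status": "active", "used": 5, "wins": 1},
}

def record_outcome(theme_id: str, won: bool, retire_below: float = 0.25) -> None:
    """Fold one bid's result back into the KB so the next bid starts smarter."""
    theme = win_themes[theme_id]
    theme["used"] += 1
    theme["wins"] += int(won)
    win_rate = theme["wins"] / theme["used"]
    if theme["used"] >= 5 and win_rate < retire_below:
        theme["status"] = "retired"   # stop reusing themes the evidence says do not work

# Post-mortem for a lost bid that leaned on the "lowest-tco" theme:
record_outcome("lowest-tco", won=False)
print(win_themes["lowest-tco"])   # {'status': 'retired', 'used': 6, 'wins': 1}
```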

The compounding claim

PursuitAgent’s product tagline is “every RFP you win makes the next one easier.” That tagline is defensible only if the last stage — the post-mortem — writes its output back into the corpus the first five stages draw from. If it doesn’t, the claim is marketing copy.

The eight stages are in order because each one depends on the one before it. Compliance depends on intake. Draft depends on capture. Review depends on draft. Submit depends on review. Post-mortem depends on everything. Break the chain at any point and the proposal still gets shipped; ship enough of them that way and the function never improves.

Where we go next

Two pieces land in the next 20 days that go deeper on specific stages.

And the “Reading an RFP” series, which goes stage-by-stage through Intake and Bid/No-Bid, runs weekly through May.

Sources

  1. Shipley Proposal Guide (7th ed.), Shipley Associates
  2. APMP Body of Knowledge — Capture Planning
  3. VisibleThread — Government proposal writing: key steps, challenges, and tips for success
  4. Quilt — How to identify bottlenecks in your RFP process
  5. Qorus — Winning proposals: how to stop wrangling SMEs
  6. PropLibrary — Proposal win themes: the good, the bad, and six examples

See grounded retrieval in the product.

Start a trial workspace and watch PursuitAgent draft cited answers from the documents you provide.