What 'compounding' means for proposal software
PursuitAgent's tagline is 'every RFP you win makes the next one easier.' What that actually requires: four mechanisms of compounding, why most AI tools aren't compounding tools, and questions to ask a vendor.
PursuitAgent’s tagline is “every RFP you win makes the next one easier.” I have written that sentence a few dozen times across the last year — on the homepage, in pitch decks, on Slack introductions with prospective customers. The sentence is load-bearing for the product. It is also the sentence I have spent the most time defending, because it makes a specific claim that a lot of proposal software tacitly rejects.
This post unpacks what the claim actually means, what the product has to do to make it true, and why most of the “AI proposal” tools currently competing for attention in the category are not compounding tools. They are stateless drafting tools with a logo on top. The distinction matters to any team buying this kind of software in 2026, and it especially matters to any team that has watched an expensive RFP tool decay into “an overpriced document repository” over a renewal cycle — a framing I did not invent, which comes from autorfp.ai’s synthesis of Loopio reviews and which has surfaced in almost every buyer conversation I have had this year.
The tagline, taken literally
If I take the tagline at face value, I have to answer a specific question. What would it mean for an RFP you just won to make the next one literally easier? Not easier because the team is more experienced, not easier because the market got friendlier, not easier because the relationship with the buyer matured. Easier because the software you are using has changed between Bid 1 and Bid 2 as a direct consequence of Bid 1.
If the answer is nothing — if Bid 2 starts with the same empty KB search box, the same unfiltered content library, the same generic prompts, the same unclassified questions — then the software did not compound anything. Bid 2 is easier only because the humans learned something, and the humans could have learned that with a word processor.
The interesting bar is the opposite. The software should know more after Bid 1 than it did before. The KB should contain answers it did not contain before. The win themes used in Bid 1 should be tagged as winning themes. The style of the winning sections should be recognizable to the drafting prompts. The buyer-specific intelligence captured during Bid 1’s capture work should be available to Bid 2 without anybody reloading it. Each of those things is a mechanism. A product that implements them compounds. A product that does not, does not.
There are four of these mechanisms, and I think every serious proposal tool will have to implement all four eventually. This post walks through each one — what it is, what makes it hard, and what a buyer should look for to know whether a vendor is doing it or just talking about it.
The four mechanisms of compounding
KB compounding — every approved answer becomes a citable block
The most obvious mechanism, and the one most tools partially implement. When a proposal ships and a specific answer has been reviewed, approved, and sent to a buyer, that answer should enter the knowledge base as a citable block, tagged with its context — which question it answered, which bid it shipped in, which SME approved it, what the buyer’s response was. The next bid that touches a similar question starts from this approved block, not from a blank retrieval.
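To make the provenance concrete: here is a minimal sketch of what one of those blocks could carry as structured data. The class and field names are illustrative, not PursuitAgent's actual schema; the point is that the answer text travels with its context rather than arriving in the library naked.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ApprovedAnswerBlock:
    """One approved answer, stored with enough provenance to stay citable."""
    text: str
    question: str             # the RFP question it answered
    bid_id: str               # the bid it shipped in
    approved_by: str          # the SME who signed off
    shipped_on: date
    outcome: str              # "won", "lost", or "pending"
    buyer_feedback: str = ""  # debrief notes, if any


block = ApprovedAnswerBlock(
    text="Our platform is SOC 2 Type II certified and audited annually.",
    question="Describe your security compliance posture.",
    bid_id="bid-2025-014",
    approved_by="security-sme",
    shipped_on=date(2025, 3, 2),
    outcome="won",
)
```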
Every RFP-response tool in the category claims this. Few implement it well. The failure modes are consistent.
First, the approved answer goes into a content library without provenance. The library holds thousands of answers, but you cannot tell which answers have been approved, which shipped, which won, which lost, and which were drafted but never submitted. The library’s volume passes for rigor; the rigor is not actually present. Sparrow Genie’s critique of content libraries names this directly — teams wind up with PDFs written for different verticals, Google Docs that have not been touched in eight months, and boilerplate that contradicts the current capabilities of the business.
Second, the library grows faster than it can be curated. Shelf’s analysis of outdated KBs makes the point that an unmaintained knowledge base actively undermines user trust; proposal libraries are knowledge bases with an extra audience — the buyer’s evaluator — whose trust is the whole point. An answer from 2024 surfaced in a 2025 response is worse than no answer at all.
Third, the write-back is manual. A proposal ships, the team moves to the next bid, and the winning answers do not enter the library unless somebody sits down after the submission and does the work. That somebody usually does not exist, or exists part-time, and the library’s freshness curve starts bending down the day the bid ships.
What real KB compounding looks like, in product terms, is: every answer approved in a shipped proposal enters the KB as a first-class block with its context, its approvers, its version history, and an automatic freshness decay that flags it for re-review at a defined cadence. The write-back is not an end-of-week ritual; it is part of the submission workflow. A bid does not ship without its answers being tagged for KB entry or explicit archival. Miss this, and every downstream mechanism breaks.
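Sketched in code, those two product behaviors are small: a ship gate that refuses to close a bid until every answer has been routed to the KB or explicitly archived, and a freshness check that flags blocks for re-review on a fixed cadence. The function names and the 180-day cadence are assumptions for illustration, not the product's actual workflow.

```python
from datetime import date, timedelta

REVIEW_CADENCE = timedelta(days=180)  # assumed re-review cadence


def close_out_bid(answers: list[dict]) -> None:
    """Ship gate: every answer must be routed before the bid can close."""
    unrouted = [a["question"] for a in answers
                if a.get("disposition") not in ("kb", "archived")]
    if unrouted:
        raise ValueError(f"Cannot ship: {len(unrouted)} answers not routed to KB or archive: {unrouted}")
    # otherwise: mark the bid shipped and enqueue KB writes for answers routed to "kb"


def stale_blocks(kb: list[dict], today: date) -> list[dict]:
    """Freshness decay: flag blocks whose last review is past the cadence."""
    return [b for b in kb if today - b["last_reviewed"] > REVIEW_CADENCE]
```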
There is a second, subtler property. A compounding KB does not just accumulate answers; it accumulates the relationships between answers. When two answers for two different buyers converge on the same underlying claim, the KB should notice and surface the convergence — because the claim is more durable when two separate bids approved it than when one did. When a new answer contradicts an existing answer, the KB should flag the contradiction for human resolution rather than quietly adding a second version that competes with the first at retrieval time. The graph structure matters. A bag of blocks with no relationships between them is a library that will collapse into retrieval chaos by block number 800.
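Surfacing candidate relationships does not require anything exotic to start. A crude sketch, using plain lexical overlap as a stand-in for the embedding or entailment models a production system would presumably use: nominate highly similar block pairs and queue them for a human to record as convergence or contradiction.

```python
import itertools


def word_set(text: str) -> set[str]:
    return {w.strip(".,;:").lower() for w in text.split()}


def similarity(a: str, b: str) -> float:
    """Jaccard overlap of word sets; a crude stand-in for embedding similarity."""
    wa, wb = word_set(a), word_set(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def relationship_candidates(blocks: list[dict], threshold: float = 0.6) -> list[tuple[str, str]]:
    """Pairs of blocks similar enough to be either a convergence or a contradiction.

    The system only nominates pairs; a human decides whether the two claims
    reinforce or conflict, and the resolution is recorded on the graph.
    """
    return [
        (a["id"], b["id"])
        for a, b in itertools.combinations(blocks, 2)
        if similarity(a["text"], b["text"]) >= threshold
    ]
```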
Theme compounding — themes that worked get promoted, themes that did not get retired
The second mechanism is more subtle. Win themes — the three to five discriminating messages that run through a proposal — are not static assets. Some themes win; some themes sound good in a draft and are never echoed by a buyer in a debrief; some themes were right a year ago and are table stakes now. A tool that compounds on themes tracks which themes appeared in which bids, correlates them against outcomes, and surfaces the correlations to the drafting function.
A proposal function without this mechanism uses themes like slogans — the same ones, year after year, whether or not the evidence supports them. Our win themes field guide walks through the specific discriminatory tests themes should clear; the compounding layer is the longitudinal piece that tells you whether the theme has held up over twenty bids, not just whether it felt right on the one you are writing.
The hard part is capturing the outcome signal cleanly. A win is sometimes driven by a theme; more often it is driven by the relationship, the price, the incumbent’s weakness, the buyer’s internal politics. Attributing wins to themes is noisy. The tool that compounds on this has to present the correlations honestly, not as a causal claim. A theme that ran in 14 wins and 11 losses is a theme that probably is not doing the work. A theme that ran in 9 wins and 1 loss is a theme worth promoting — as long as the sample is big enough not to be coincidence.
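One way to present the correlation honestly is to score each theme with an uncertainty-aware statistic rather than a raw win rate. A minimal sketch, using a Wilson lower bound and the illustrative records from the paragraph above:

```python
from math import sqrt


def wilson_lower_bound(wins: int, losses: int, z: float = 1.96) -> float:
    """Lower 95% bound on a theme's true win rate; small samples score low."""
    n = wins + losses
    if n == 0:
        return 0.0
    p = wins / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom


print(round(wilson_lower_bound(14, 11), 2))  # ≈ 0.37: probably not doing the work
print(round(wilson_lower_bound(9, 1), 2))    # ≈ 0.6: worth promoting, sample permitting
```

Even the better score is a correlation, not a causal claim; the tool's job is to surface it and let a human decide whether the theme earned the record or rode along with it.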
The write-back mechanic matters here too. A post-mortem that does not update the theme tags is a post-mortem that does not move the theme library. Qorus’s piece on SME bottlenecks is mostly about collaboration, but the deeper implication is that the teams who own the themes — sales, marketing, product — are the same teams whose attention the proposal function competes for. The write-back has to happen without their active participation, or it does not happen.
Style compounding — the system learns your house voice from past wins
The third mechanism is the one most discussed and least understood. A compounding tool should, over time, learn the stylistic signature of the team’s winning proposals — the sentence lengths, the tone, the specific vocabulary, the balance of narrative and data. A new draft should inherit that signature automatically, not require a proposal writer to re-teach the model every session.
Every AI-drafting tool in the category gestures at this. The genuine version requires more than a few in-context examples. It requires a house-style model that has been tuned on, or systematically prompted with, the corpus of previously-shipped winning content — and updated as the team’s style evolves.
The hard part is that style is entangled with quality. A team that is consistently losing is a team whose “house style” should not be reproduced uncritically; the tool that learns that style compounds the team’s failure mode. The answer is not to stop the mechanism; it is to weight the learning toward winning proposals, not all proposals, and to be explicit with the user about what is being learned and from what.
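A low-tech version of "systematically prompted with the winning corpus" is exemplar selection: seed the drafting prompt with recent sections from shipped wins, and show the user exactly which sections were used. The sketch below is one way that could look; it is not PursuitAgent's house-style model, and the field names are assumptions.

```python
def style_exemplars(sections: list[dict], k: int = 5) -> list[dict]:
    """Pick style references from shipped winners only, most recent first.

    Each section dict is assumed to carry 'text', 'outcome', and 'shipped_on'.
    Weighting toward wins rather than all proposals is the point: a losing
    house style should not be reproduced uncritically.
    """
    winners = [s for s in sections if s["outcome"] == "won"]
    winners.sort(key=lambda s: s["shipped_on"], reverse=True)
    return winners[:k]


def style_preamble(exemplars: list[dict]) -> str:
    """Assemble an explicit, inspectable style block for the drafting prompt."""
    refs = "\n\n---\n\n".join(s["text"] for s in exemplars)
    return (
        "Match the tone, sentence length, and vocabulary of these shipped, "
        "winning sections:\n\n" + refs
    )
```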
The failure mode of a tool that does not compound on style is that every draft feels slightly off. The proposal writer re-does the rhythm of every sentence, the tone of every section, the positioning of every claim. The tool has drafted the right content in the wrong voice, and the cost of the rewrite is nearly the cost of writing from scratch.
Capture compounding — buyer-specific intelligence accumulates per-account
The fourth mechanism is the one almost nobody in the category does. Capture work — the two-to-eight-week period between bid decision and drafting — produces rich intelligence about a specific buyer: their evaluation panel, their scoring rubric, their unstated priorities, their known vendor shortlist, their procurement-side constraints. In most proposal functions, this intelligence lives in one person’s head or in a chain of Slack messages that evaporate when the bid ships.
A compounding tool captures this intelligence as structured data, attached to the buyer account. The next time the buyer issues an RFP — a modification, a renewal, a related procurement from the same agency — the intelligence is available to the capture lead immediately. The rubric weights that drove the previous bid’s scoring are visible. The evaluator preferences inferred from the last submission’s feedback are surfaced. The buyer’s history with your incumbent competitors is retained.
This mechanism is what we mean when we say proposal software should operate on an account graph, not just a document library. A document library is flat: files, searchable, undifferentiated by who they relate to. An account graph is structured: intelligence is attached to the accounts it came from, the bids it drove, the themes it shaped. The eight-stage pipeline has its capture-stage output land in this graph; without the graph, capture becomes a stage the function performs and then erases.
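To make the distinction concrete, here is a minimal sketch of the account-graph shape: capture intelligence attached to the account it came from, alongside the bids it drove and the themes it shaped. Classes and fields are illustrative, not the product's schema.

```python
from dataclasses import dataclass, field


@dataclass
class CaptureIntel:
    source_bid: str
    evaluation_panel: list[str] = field(default_factory=list)
    rubric_weights: dict[str, float] = field(default_factory=dict)  # e.g. {"technical": 0.4, "price": 0.3}
    unstated_priorities: list[str] = field(default_factory=list)
    known_competitors: list[str] = field(default_factory=list)


@dataclass
class Account:
    name: str
    bids: list[str] = field(default_factory=list)
    themes_used: dict[str, list[str]] = field(default_factory=dict)  # theme -> bids it ran in
    intel: list[CaptureIntel] = field(default_factory=list)

    def briefing(self) -> list[CaptureIntel]:
        """What the next capture lead inherits the day the buyer re-issues an RFP.

        Assumes bid ids sort roughly chronologically; most recent intel first.
        """
        return sorted(self.intel, key=lambda i: i.source_bid, reverse=True)
```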
Capture compounding is the mechanism most directly responsible for the tagline. If winning Bid 1 from a specific buyer makes Bid 2 from the same buyer literally easier — because the system remembers what worked, what didn’t, and what the evaluation panel cared about — then the product has earned the claim. If it doesn’t, the claim is aspirational.
Why most tools don’t compound
The four mechanisms above sound obvious when laid out. The observable reality of the category is that most tools implement zero or one of them well and the rest poorly or not at all. The explanation is architectural, not motivational.
Most AI-drafting tools were built as stateless drafting tools. The model gets prompted, the draft comes out, the cycle ends. There is no persistent, structured, account-bound state that the next draft session inherits. The tool is an assistant, not a memory. Assistants do not compound; memories do.
Content libraries are built as flat document stores. They are searchable, which makes them feel smart; they are not structured, which makes them rot. The Loopio review synthesis captures what happens when the library rots: the expensive tool becomes a document repository, the AI suggestions become useless, the team’s confidence in the tool collapses, the renewal conversation gets painful. The library is flat by design; the rot is structural.
Post-mortems do not write back, because write-back is hard. It requires agreement on which fields to update, which themes to promote, which answers to deprecate — and the agreement requires human attention during the moment right after a bid ships, when human attention is scarcest. Most tools do not build for this moment because the tool vendor has the same incentive the proposal team does, which is to move on.
KB ownership does not survive employee churn. The SME who built the initial 400 blocks leaves; the new SME does not know what the old SME knew about which blocks are still accurate. Capterra’s Qorus reviews surface exactly this pattern — the library’s content searches pull less-relevant results over time, and the teams attributing the problem to the tool are partially right and partially catching their own organizational churn.
The net result is that a procurement buyer evaluating tools in 2025 or 2026 is often looking at five tools that claim compounding features and zero tools that can pass a hard test of compounding. The hard test is the question in the next section.
What this means for buyer evaluation
If I am sitting on the buyer side of a procurement process for proposal software, here are the questions I ask. They are designed to separate real compounding from marketing compounding.
Show me what the system knows about a specific account after 10 bids that it did not know after 1. Not “show me the account view.” Show me the difference. What fields populated? What themes got tagged? What rubric weights got inferred? What evaluator preferences got captured? A vendor who cannot answer this concretely is a vendor whose account graph is either not structured or not populated.
Show me a KB block with a version history going back 18 months, with the bids it shipped in, the approvers, and the outcomes. If the KB tracks blocks as blocks but not as histories, the compounding claim is shallow. The history is where the learning lives.
Show me how a post-mortem updates the system. Not “does the system support post-mortems” — any tool can have a post-mortem form. Show me the mechanism by which post-mortem output changes the KB, the theme tags, the style model, or the account graph. If the mechanism does not exist, the system will not compound across the cycle where compounding is supposed to happen.
Show me what happens when a new proposal writer joins the team. A compounding system should shorten that onboarding materially, because the system carries the institutional knowledge that used to live in individual heads. If the answer is “they read old proposals for a week,” the system has not absorbed the institutional knowledge; the humans are still carrying it.
Show me the freshness curve of the KB across a year. A compounding KB is one where the average block age is stable or decreasing over time, because freshness is a product feature. A library where average block age only increases is a library where rot is winning.
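A passing answer is a chart the vendor can generate from block metadata. The data point behind the chart is simple to compute; a sketch, assuming each block stores a last-reviewed date:

```python
from datetime import date


def average_block_age_days(last_reviewed: list[date], as_of: date) -> float:
    """Average days since last review across the KB, as of a given date.

    Sample this monthly from snapshots of the block metadata: a flat or
    falling curve means re-review is keeping pace with growth; a curve that
    only climbs means rot is winning.
    """
    ages = [(as_of - reviewed).days for reviewed in last_reviewed]
    return sum(ages) / len(ages) if ages else 0.0
```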
These questions are harder to answer than most vendor demos are set up for. That is the point. The tagline “every RFP you win makes the next one easier” is a bar, and the questions are the tests the bar implies. If the vendor cannot answer them, the claim is advertising.
Closing: compounding is the edge
I opened with the tagline because it is the thing PursuitAgent is betting on. The product’s architecture is a specific answer to the four mechanisms above. The KB writes back on every shipped proposal. Themes get tagged by outcome and surface to the drafter on the next bid. The house-style model is trained on the team’s winning corpus. The account graph stores capture intelligence per-buyer, and the next bid from the same buyer starts with that intelligence pre-loaded. Whether we have gotten each of these right in year one is a separate question — the year-one regressions post this month is honest about the parts we have not gotten right yet. But the architectural commitment to the four mechanisms is what I think separates a compounding proposal tool from the rest of the category.
The broader point is not about PursuitAgent specifically. The broader point is that proposal software that does not compound is proposal software that will look more and more like a productivity add-on and less and less like a strategic system. Productivity add-ons are commodity; strategic systems are defensible. The category is going to bifurcate between 2026 and 2028 along this line. The vendors who invest in the four mechanisms will separate from the vendors who do not.
The buyer-side implication is that the next procurement cycle should include the questions above. A tool that cannot answer them will cost the same as a tool that can, and will do less of the work that matters across the five-year horizon most proposal shops operate on.
The internal-operator implication — for anyone inside a proposal function right now — is that the compounding work is not primarily a tool choice. It is a set of rituals: the post-mortem that writes back, the theme audit that retires the generics, the SME-collaboration patterns that feed the KB without breaking the SMEs. SME collaboration reconsidered and the eight-stage pipeline cover the ritual side of this argument in more detail. The tool can accelerate the rituals; it cannot replace them. A team with the rituals and a mediocre tool will out-compound a team with no rituals and the best tool on the market.
Which is maybe the most honest version of the tagline I can write. “Every RFP you win makes the next one easier” — if the team does the work to let it, and if the tool is architected to let that work land somewhere durable. Both conditions. Neither alone.
— Bo
Sources
- 1. Autorfp — Loopio reviews summary
- 2. Capterra — Qorus for proposal management
- 3. Sparrow Genie — RFP content library best practices
- 4. Shelf — Outdated knowledge base
- 5. Qorus — Winning proposals: stop wrangling SMEs
- 6. PursuitAgent — The 8-stage RFP pipeline
- 7. PursuitAgent — Win themes field guide
- 8. PursuitAgent — SME collaboration reconsidered
See grounded retrieval in the product.
Start a trial workspace and watch PursuitAgent draft cited answers from the documents you provide.