State of Proposal Tools — Wave 2 preview
A preview of the Wave 2 annual research drop. What we know now that we didn't in August, what the category looks like heading into year two, and which five shifts we're going to document in the full release next Sunday.
The second annual State of Proposal Tools drops next Sunday. This is the preview — a short post covering what has changed in the category since the Wave 1 report in August, which vendors we are adding, which we are dropping, and the five shifts we think the full release has to explain.
Wave 1 covered 37 vendors across the incumbent, challenger, and niche tiers. Wave 2 covers 45. The additions fall mostly into two groups: newly funded AI-first challengers that were not yet generally available in August, and a cohort of vertical-specialist vendors (federal-only, healthcare-only, security-questionnaire-only) that category research has historically undercounted. Two vendors we covered in Wave 1 have been acquired and folded into enterprise suites; they move out of the matrix as independent entries and into a new “absorbed” cohort at the back of the report.
Five shifts we will document in the full release
Shift one: “grounded AI” is now table stakes, and the definition is fragmenting. Every vendor with an AI feature now uses the phrase. Few of them use it to mean the same thing. The full report opens with a taxonomy — what counts as grounded, what counts as retrieval-augmented, what counts as “we have a chat button” — and maps each vendor’s actual feature against the taxonomy. The Stanford HAI research on legal RAG is still the clearest external reference point.
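As a concrete illustration of what the taxonomy does, here is a minimal Python sketch. The tier names, the feature fields, and the decision rule are our illustrative assumptions for this post, not the report's actual rubric; the full release defines the tiers precisely.

```python
from dataclasses import dataclass
from enum import Enum


class GroundingTier(Enum):
    """Illustrative tiers only; the full report defines the real rubric."""
    T1_GROUNDED = "generates only from retrieved, cited source passages"
    T2_RETRIEVAL_AUGMENTED = "retrieves context but may generate beyond it"
    T3_CHAT_BUTTON = "general-purpose model with no retrieval layer"


@dataclass
class VendorAIFeature:
    """Hypothetical observed behaviors, not vendor marketing labels."""
    vendor: str
    retrieves_from_customer_library: bool
    cites_sources_inline: bool
    generates_beyond_retrieved_text: bool


def classify(feature: VendorAIFeature) -> GroundingTier:
    # Assumed decision rule: tiers come from observed behavior,
    # never from whether the vendor says "grounded".
    if not feature.retrieves_from_customer_library:
        return GroundingTier.T3_CHAT_BUTTON
    if feature.cites_sources_inline and not feature.generates_beyond_retrieved_text:
        return GroundingTier.T1_GROUNDED
    return GroundingTier.T2_RETRIEVAL_AUGMENTED
```

The useful property of a rule like this is that two vendors using the same marketing phrase can land in different tiers, which is the whole point of the taxonomy.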
Shift two: the incumbent pricing model is fracturing. A year ago, $50k-$150k ACV with a minimum-seat floor was the dominant incumbent pattern. We are now seeing three distinct patterns: the legacy enterprise-seat model, a usage-based model tied to bids-per-year, and a hybrid model that bundles a seat floor with overage pricing. The full report has a pricing-pattern matrix across the 45 vendors and notes where published pricing matches private quotes we could verify.
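To make the divergence concrete, here is a rough sketch of what each pattern charges a hypothetical team; every number below is invented for illustration, and the report's matrix rests on verified quotes, not these functions.

```python
def seat_model(seats: int, price_per_seat: float, seat_floor: int) -> float:
    """Legacy enterprise-seat model: you pay for at least the floor."""
    return max(seats, seat_floor) * price_per_seat


def usage_model(bids_per_year: int, price_per_bid: float) -> float:
    """Usage-based model tied to bids run per year."""
    return bids_per_year * price_per_bid


def hybrid_model(seats: int, seat_floor: int, price_per_seat: float,
                 bids_per_year: int, included_bids: int,
                 overage_per_bid: float) -> float:
    """Seat floor bundled with per-bid overage above an included quota."""
    base = max(seats, seat_floor) * price_per_seat
    overage = max(0, bids_per_year - included_bids) * overage_per_bid
    return base + overage


# Invented example: a 12-seat team running 80 bids a year.
print(seat_model(12, 6_000, seat_floor=10))         # 72,000
print(usage_model(80, 900))                         # 72,000
print(hybrid_model(12, 10, 4_000, 80, 50, 600))     # 66,000
```

The same team lands at a different ACV under each pattern, which is why a pricing-pattern matrix matters more than any single list price.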
Shift three: the AI-first challengers have matured past the demo. Wave 1 noted that several challengers had impressive demos but thin production references. By April 2026 most of those vendors have named customers at scale, and the G2 and Capterra review counts have caught up. The maturity gap is narrowing faster than the incumbents have publicly acknowledged.
Shift four: procurement-side tools have started showing up in buyer workflows. A category we undercounted in Wave 1 is the buyer side of the house: RFP authoring, evaluation-panel tooling, bid-intake automation. Wave 2 adds a separate procurement-side cohort, because several of these vendors now interoperate with the response-side tools, and the category split matters for anyone responding to RFPs that were built with these tools.
Shift five: the analyst framings have started to catch up, unevenly. Gartner’s most recent public MQ summary on proposal management and the most recent Forrester Wave land in different places on several of the same vendors. The full report has a side-by-side on the analyst framings and names where we think each analyst got it right and where we think they haven’t caught up yet. As always, this is opinion.
Method changes in Wave 2
Three adjustments to the method, for anyone who reads these carefully.
First, we broadened the review-site sample. Wave 1 pulled from G2 and Capterra. Wave 2 adds Gartner Peer Insights, plus practitioner sentiment from public Reddit threads the team could verify as non-astroturfed. The bar for inclusion is unchanged (we cite the review or we leave the claim out), but the denominator is larger.
Second, we added a “churn signal” column to each vendor row. Where we can verify (through public filings, press releases, or LinkedIn layoff disclosures) that a vendor has lost customers or staff in the past six months, that shows up. Where we can’t verify, the column is empty; we don’t guess.
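A minimal sketch of how an empty-means-unverified column can be modeled; the field and enum names below are our assumptions for illustration, not the report's schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ChurnSource(Enum):
    PUBLIC_FILING = "public filing"
    PRESS_RELEASE = "press release"
    LINKEDIN_DISCLOSURE = "LinkedIn layoff disclosure"


@dataclass
class ChurnSignal:
    source: ChurnSource
    citation_url: str  # every signal must point at its evidence
    observed: str      # e.g. "customer loss" or "staff reduction"


@dataclass
class VendorRow:
    vendor: str
    # None means "could not verify", never "no churn". We don't guess,
    # so there is no boolean here to default to False.
    churn_signal: Optional[ChurnSignal] = None
```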
Third, we stopped ranking. Wave 1 produced a tier map that some readers used as a leaderboard. That was not the intent — the tiers were capability clusters, not ranked scores — and the shorthand was confusing enough that the team decided to publish the capability clusters without tier labels. Vendors are grouped by archetype (incumbent, challenger, vertical specialist, absorbed) and by capability dimension, not by rank.
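For readers who want the clusters-not-ranks point concrete: a small sketch where cohort membership is a set, so there is no ordering to misread as a leaderboard. Vendor names and assignments are placeholders.

```python
from collections import defaultdict

# Placeholder rows: (vendor, archetype). Archetypes mirror the report's
# four groups; the vendor names are invented.
ROWS = [
    ("VendorA", "incumbent"),
    ("VendorB", "challenger"),
    ("VendorC", "vertical specialist"),
    ("VendorD", "challenger"),
    ("VendorE", "absorbed"),
]

# A set per archetype: membership without order. Nothing here sorts
# into first place, which is the point.
clusters: defaultdict[str, set[str]] = defaultdict(set)
for vendor, archetype in ROWS:
    clusters[archetype].add(vendor)

for archetype, members in clusters.items():
    print(archetype, sorted(members))  # alphabetical for display only
```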
The vendor-level changes worth previewing
Seven vendor-level changes between Wave 1 and Wave 2 are substantial enough to preview here; the full report has the remaining dozen. Each is described in one paragraph below.
Loopio. The incumbent with the largest install base and the largest review volume. Wave 2 finds modest product changes and the same dominant review complaint: content rot once the library falls behind. The review base has added roughly 180 new reviews since Wave 1, and the sentiment distribution is stable: positive on workflow, mixed on AI, negative on library maintenance. Loopio still does not publish pricing.
Responsive (formerly RFPIO). Since Wave 1, Responsive has launched a redesigned search path; the G2 review base shows the search complaint persisting in roughly 30% of new reviews, down slightly from Wave 1’s ~38%. The UX complaint is still the dominant signal, and the most recent release cycle has not landed cleanly on it.
QorusDocs. Feature additions targeting the “very slow” complaint — the preview-load and cart-view paths are materially faster in recent builds, verified in our own test run. The 10-pursuit dashboard cap we flagged in Wave 1 has been expanded. We note this change and move QorusDocs up one cohort on the responsiveness dimension.
Upland Qvidian. The UI modernization promised in Wave 1 has partially landed. The interface is visibly newer; the underlying workflow model is unchanged. Sentiment in recent reviews is mixed: the cohort that liked Qvidian’s stability is unhappy with the UI changes, while the cohort that disliked the old UI has not moved en masse.
AutogenAI. Continued growth in the enterprise tier. The product’s positioning on hallucination handling is specific and public — which we appreciate — and the architecture sits between T1 and T2 in the grounded-AI taxonomy the full report lays out. Worth a separate post when Wave 2 lands.
AI-first challenger cohort, aggregate. Median review count per vendor is up from ~25 in Wave 1 to ~55 in Wave 2, which is enough sample to start separating the cohort into credible-at-scale and promising-but-early sub-groups. The full report splits them.
Vertical specialists, aggregate. The DDQ-and-security-questionnaire sub-cohort has grown fastest. Four new vendors in this sub-group cleared inclusion between Wave 1 and Wave 2. The procurement-side tooling sub-group has grown more slowly than we expected; the full report discusses why.
Three practitioner sentiment shifts
Three shifts in what practitioners say on review sites and in the open conversations we monitor (Hacker News, industry blogs, LinkedIn posts we could attribute).
One: the “AI is magic” posture in review copy has faded. Reviews in the past six months are markedly more specific about what the AI does and does not do. This is a healthy shift — practitioner sentiment is converging on the view that AI is a drafting assistant and a retrieval tool, not a decision-maker.
Two: the “content rot” complaint has intensified on incumbent platforms. The complaint has always been present; the language of it has sharpened. Practitioners now name specifically what rotten content produces (outdated compliance claims, stale references, citations to retired products) where earlier review copy treated rot as a generic maintenance issue.
Three: practitioner skepticism toward generic AI proposals has sharpened. Industry blog sentiment names specific failure modes (fabricated case studies, invented compliance language, boilerplate dressed as bespoke) rather than objecting to AI in proposals categorically. The category has internalized the failure modes and is starting to evaluate vendors against them.
Research-process changes worth naming
Four changes to how we produce the report, prompted by feedback on Wave 1.
Reproducibility. Wave 1’s methodology was documented in prose; Wave 2 ships with a machine-readable appendix that lists every review URL sampled, every vendor’s public-pricing citation, and every analyst-report citation used. A third party could re-run the review-sentiment coding on the same sample and check agreement.
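To make “check agreement” concrete, here is a sketch of what a re-run could look like: load the appendix, re-code the same reviews independently, and compare labels with Cohen's kappa. The appendix field names and the label set are our illustrative assumptions; the shipped appendix defines its own schema.

```python
import json
from collections import Counter


def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Agreement between two coders, corrected for chance agreement."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two coders labeled independently.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (freq_a[k] / n) * (freq_b[k] / n) for k in freq_a.keys() | freq_b.keys()
    )
    return (observed - expected) / (1 - expected)


# Hypothetical appendix rows: one sampled review per entry, with the
# sentiment label our coding assigned. Field names are illustrative.
appendix = json.loads("""[
    {"review_url": "https://example.com/r/1", "sentiment": "positive"},
    {"review_url": "https://example.com/r/2", "sentiment": "negative"},
    {"review_url": "https://example.com/r/3", "sentiment": "mixed"}
]""")

ours = [row["sentiment"] for row in appendix]
theirs = ["positive", "negative", "negative"]  # an independent re-coding
print(f"kappa = {cohens_kappa(ours, theirs):.2f}")  # 0.50 on this toy sample
```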
Attribution care. Several Wave 1 citations pointed to aggregator pages that summarized G2 and Capterra reviews. Wave 2 links to the underlying reviews wherever we can, and keeps the aggregator link only when it provides useful summary context. Two citations in Wave 1 that did not meet the Wave 2 attribution bar have been removed.
Conflict disclosure. Wave 2 adds a disclosure section noting that PursuitAgent is itself a vendor in the category and that the report covers PursuitAgent among the AI-first challengers. The PursuitAgent row is assessed by the same rubric as every other vendor. Readers are welcome to discount our self-assessment; the public evidence the assessment relies on is cited in the appendix.
Scope note. Wave 2 adds a “not in scope” section listing adjacent categories we deliberately excluded (contract-lifecycle management, proposal-only graphic design tools, generic document-generation platforms without RFP workflow). Several Wave 1 readers asked about vendors in these categories; the exclusion is deliberate.
Why the report ships when it does
Anniversary-adjacent releases run the risk of looking performative. We considered moving the Wave 2 release to June, when the annual review-cohort data from G2 and Capterra reaches a natural batching point. The release runs in April because it lets buyers planning fiscal-year renewal decisions (most of which happen in Q2 and Q3 for commercial buyers) use the report as an input to decisions they are already making. A June release would have been tidier for the methodology; an April release is more useful for the readers.
What we hope readers take from Wave 2
Wave 1 was a first map of an active category. Wave 2 is the first update that lets a reader see which predictions held up and which didn’t. Two predictions from Wave 1 held up clearly: that the incumbent content-library rot problem would remain the dominant customer complaint across the year, and that AI-first challengers would graduate from demo to production inside 12 months. One Wave 1 prediction did not: we thought procurement-side tools would get broader attention before April 2026, and they haven’t yet. Wave 2 will say so.
The full report lands Sunday, April 11. It is long. It is free. It does not gate on an email. If the Wave 1 report is still useful to your team, Wave 2 is intended to make Wave 1 obsolete.
Posts by The PursuitAgent research team are synthesis, not original reporting. Every cited number has a source in the post; uncited numbers are omitted. Views reflect PursuitAgent’s position.