The overpriced document repository trap
An opinion piece on why the AI in most RFP tools ends up unused. The reviews tell a consistent story across Loopio, Responsive, and Qvidian: teams pay for AI features and end up using a search box. We have a theory about why.
The opinion: most RFP tools end up as expensive document repositories with a clunky search box on top, and the AI features almost everyone pays extra for don’t change that.
I have been reading reviews of RFP tools for two years. Across Loopio, Responsive (formerly RFPIO), and Upland Qvidian, the same complaint structure shows up. The vendor sells AI-assisted answer generation. The customer buys it. Six months later, the customer is using the tool to search a content library they barely maintain. The AI feature is on by default and ignored in practice.
The reviews
Quoting the autorfp.ai summary of Loopio reviews, which pulls verbatim from G2 and Capterra: “Magic doesn’t work well. The answers are usually wrong.” The tool’s “Magic” answer-suggestion feature works on basic questions and fails on nuanced ones, so users end up re-editing most suggestions. Once the content library degrades (and it always does), the suggestions get worse and the tool becomes “an overpriced document repository.”
Responsive’s G2 reviews repeat the structure with different proper nouns. “The search is terrible. It constantly misidentifies what I’m searching for.” The complaint is keyword-match search behaving like keyword-match search in a world where users expect semantic retrieval. A user wrote that the AI “pales in comparison to basic ad-hoc GenAI” — meaning a sales engineer is getting better answers by pasting the question into ChatGPT than by using the expensive tool they pay for.
Qvidian’s G2 reviews are tonally different but structurally identical. UI feels dated, AI is inadequate, the tool is slow, the price is high. New users have trouble getting value out of it.
The 1up.ai team has written about this directly: RFP tools are “mostly just knowledge management” with bloat that leaves users “getting lost.” That is from a competing vendor, but you can verify the underlying claim against the review sites yourself.
My theory about why
The mistake every incumbent made: they built AI features on top of a content library they assumed would be maintained. The assumption was wrong, in a specific and predictable way.
Content libraries rot. They rot because nobody owns them. They rot because the SMEs who would refresh them are billable, and refreshing the library does not bill. They rot because nobody flags stale content until somebody ships a stale answer to an evaluator and the team loses a deal they should have won.
A library that rots is not a library; it is a hazard. AI on top of a hazardous library produces hazardous output. The AI feature surfaces a 14-month-old answer about an audit period that ended; the drafter ships it; the deal is lost; the team blames the AI.
The category response, so far, has been to add more AI features. That doesn’t fix the underlying rot. It accelerates the failure.
What the alternative looks like
I think the alternative is uncomfortable for incumbents and unfamiliar to buyers. The product has to put discipline on the content layer, not just intelligence on top of it. Specifically:
Freshness as a first-class concern. Every block has a last-verified date. Blocks that reference time-bounded facts (audit periods, retention windows, certification levels) get an automatic flag when the time bound is approaching. The product drives the SME to review the block before it goes stale, not after a deal is lost; a sketch of that check follows this list. We covered the implementation in the changelog entries on shipped freshness scores and shipped freshness alerts.
Ownership per block. Not per category. Per block. The KB knows which human is responsible for which paragraph. When the SME leaves the company, the orphaned blocks are visible. The vendor’s job is to make ownership obvious, not to assume it exists.
Citations that prove themselves. When the AI drafts an answer, every claim points to the source span. The reviewer clicks the citation and sees the underlying text. If the cited text does not actually support the claim, the system refuses to draft instead of hallucinating (there is a sketch of that rule below). We covered this in the grounded-AI pledge in code.
Usage telemetry as content health. Blocks that get used 10 times in a quarter are high-confidence; blocks that have not been touched in two years are suspicious. The product surfaces this. The KB owner does not have to guess.
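To make the discipline above concrete, here is a minimal sketch of the kind of per-block record and review check we have in mind. The field names, thresholds, and the needs_review helper are illustrative assumptions for this post, not PursuitAgent’s actual schema; the point is that every signal the incumbents ignore fits in a handful of columns.

```python
# Illustrative sketch only: field names and thresholds are assumptions for this post.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Block:
    block_id: str
    text: str
    owner: Optional[str]          # a person, not a category; None means orphaned
    last_verified: date           # when an SME last confirmed the block is still true
    valid_until: Optional[date]   # time bound for dated facts (audit periods, cert levels)
    last_used: Optional[date]     # when a drafter last pulled this block into an answer

def needs_review(block: Block, today: date, lead: timedelta = timedelta(days=30)) -> list[str]:
    """Reasons this block should be pushed to its owner before it goes stale."""
    reasons = []
    if block.owner is None:
        reasons.append("orphaned: no owner on record")
    if block.valid_until is not None and block.valid_until - today <= lead:
        reasons.append("time-bounded fact is about to lapse")
    if today - block.last_verified > timedelta(days=365):
        reasons.append("not verified in over a year")
    if block.last_used is None or today - block.last_used > timedelta(days=730):
        reasons.append("not used in two years")
    return reasons

# Example: an audit-period answer whose time bound is three weeks out and whose owner has left.
soc2 = Block("kb-0142", "Our SOC 2 Type II audit period runs through July 2025.",
             owner=None, last_verified=date(2024, 3, 1),
             valid_until=date(2025, 7, 31), last_used=date(2025, 6, 20))
print(needs_review(soc2, today=date(2025, 7, 10)))
# ['orphaned: no owner on record', 'time-bounded fact is about to lapse', 'not verified in over a year']
```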
None of these are AI features. They are workflow features that make the AI features stop being a hazard. The incumbents could ship them. They have not, because the AI features are easier to demo and the content discipline is harder to demo and the buyer’s procurement evaluation values demo-ability.
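For the citation point specifically, the refusal rule is easy to sketch. The lexical-overlap check below is a stand-in for whatever scoring model actually does the work in a real system; the part that matters is the control flow, where an unsupported claim blocks the draft instead of shipping with a decorative citation.

```python
# Illustrative sketch only: the overlap heuristic stands in for a real entailment model.
from dataclasses import dataclass

@dataclass
class Citation:
    claim: str        # sentence the draft wants to assert
    source_span: str  # verbatim text from the knowledge-base block it cites

def span_supports_claim(c: Citation, min_overlap: float = 0.6) -> bool:
    """Crude lexical check: does the cited span contain the claim's content words?"""
    claim_terms = {w.lower().strip(".,") for w in c.claim.split() if len(w) > 3}
    span_terms = {w.lower().strip(".,") for w in c.source_span.split()}
    return bool(claim_terms) and len(claim_terms & span_terms) / len(claim_terms) >= min_overlap

def draft_or_refuse(citations: list[Citation]) -> str:
    """Every claim must point at a span that supports it, or the draft is withheld."""
    unsupported = [c.claim for c in citations if not span_supports_claim(c)]
    if unsupported:
        return "REFUSED. No supporting source for: " + "; ".join(unsupported)
    return "\n".join(c.claim for c in citations)

# Example: a retention claim the cited block does not actually state.
c = Citation(claim="Backups are retained for seven years in all regions.",
             source_span="Database backups are retained for 35 days.")
print(draft_or_refuse([c]))   # REFUSED. No supporting source for: Backups are retained ...
```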
What I think the reviews are really saying
Read the reviews of Loopio and Responsive again with this lens. The complaints are not actually about the AI. They are about the content layer the AI runs on. The AI gets blamed because the AI is the visible layer. The library is the layer that broke.
The category gets unstuck by acknowledging that. We are betting PursuitAgent on it. The bet might be wrong. If the incumbents fix the content layer in the next 18 months, our differentiation collapses. So far, the trajectory of their releases suggests they will keep adding AI features instead. We will see.
Either way: a tool you pay 1,500 dollars per seat per year for should not be an overpriced document repository. The reviewers calling it that are not wrong. They are telling you what they actually use the tool for, and the tool is not delivering what was sold.