Field notes

Reviews watch: what G2 and Capterra said this month

Five reviews from G2 and Capterra worth reading if you're shopping the proposal-software category. Loopio, Responsive, QorusDocs, Upland Qvidian — the patterns that recur.

PursuitAgent · 6 min read · Research

A monthly habit we’ve decided to adopt: reading what practitioners are saying on G2 and Capterra about the major incumbents in the proposal-software category, and posting a short summary of what surfaces. This is not a takedown post. It is a reading list, with commentary, drawn from publicly available reviews. Where the review is interesting, we say why. Where the review reflects a pattern we’ve seen before, we name the pattern.

We are also publishing this post under the company byline rather than a research byline, because the work involved is short and the commentary is interpretive rather than analytical. The research team will pick up the same beat in a more rigorous quarterly format.

Five reviews this month, drawn from the four vendors with the largest installed bases.

Loopio — “Magic doesn’t work well”

Capterra Loopio reviews and the AutoRFP summary — both surface the same recurring complaint, in the same words across multiple reviews: Loopio’s “Magic” AI feature works on the basic questions but breaks on the nuanced ones. The reviewer pattern: users start enthusiastic, find the AI usable for security questionnaire boilerplate, then encounter a complex question, get a wrong answer, and end up re-editing most of what Magic produces.

The pattern: AI features that are bolted onto a 10-year-old retrieval engine. The retrieval is doing keyword matching against a content library; the AI is wrapping the retrieved content in generated prose. When the retrieval misses, the prose is confidently wrong. The user can’t tell from the output that the retrieval was the failure; they read it as the AI hallucinating.
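The failure mode just described can be sketched in a few lines. This is an illustrative toy, not any vendor's actual code: a keyword retriever over a tiny content library has no "no match" path, so a nuanced question with partial keyword overlap silently returns the wrong snippet, and the generation step downstream wraps it in confident prose.

```python
# Toy content library, standing in for a proposal-answer database.
LIBRARY = {
    "Do you encrypt data at rest?":
        "All customer data is encrypted at rest with AES-256.",
    "Do you support SSO?":
        "We support SAML 2.0 single sign-on.",
}

def keyword_retrieve(question: str) -> str:
    """Score each library entry by shared keywords; return the best match.

    Note there is no failure path: even a question with weak overlap
    returns *something*, and the caller cannot tell from the output
    that retrieval, not generation, was the point of failure.
    """
    q_words = set(question.lower().split())
    def overlap(entry: str) -> int:
        return len(q_words & set(entry.lower().split()))
    return LIBRARY[max(LIBRARY, key=overlap)]

# Boilerplate question: keyword overlap finds the right entry.
print(keyword_retrieve("Do you support SSO?"))

# Nuanced question: the keywords overlap the encryption entry, so the
# user gets a confident answer about encryption at rest, not about
# key rotation, with no signal that the retrieval missed.
print(keyword_retrieve(
    "Do you rotate keys when data is at rest in a customer-managed KMS?"))
```

A semantic retriever can fail the same way, but it usually exposes a similarity score that lets the product say "low confidence" instead of answering; a bolted-on generation layer over keyword matching has no such signal to surface.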

Why this matters: it is the load-bearing complaint about category leaders. The complaint is not “AI is bad.” The complaint is “this specific implementation of AI gives me wrong answers and I can’t tell when.” Practitioners have learned to distrust the feature.

Responsive — “the search is terrible”

G2 reviews of Responsive repeatedly use the exact phrase “the search is terrible.” The reviewer description: keyword matching that “constantly misidentifies what I’m searching for and shows completely unrelated results.” The newer version of the product, rolled out in the last 12–18 months, is described in the pros-and-cons view as having made the experience “LESS intuitive and buggy” — capitalization in the original.

The pattern: legacy CMS architecture. Responsive’s underlying retrieval was built on keyword indexing in a pre-semantic-search era. The product has been incrementally updated but the retrieval substrate is the same. Semantic retrieval is the industry default in 2025 for this kind of corpus; running keyword search against a proposal library is a 2018 experience surfaced in 2025 packaging.

The “version rollout made it less intuitive” complaint is interesting separately. It is the signature of a product team adding features to a creaking foundation. Each release adds more surface area, and each release surfaces the limits of the underlying architecture more visibly to the end user. Reviewers register this as the product getting worse, even when individual features are improving.

QorusDocs — “very slow”

Capterra reviews of QorusDocs repeatedly describe the product as “very slow” — long waits to preview files, view the cart, and use routine functionality. A separate complaint: the dashboard caps at 10 pursuits, which prevents teams above a certain size from seeing their full pipeline at a glance.

The pattern: a product whose performance constraints date from an earlier era of expectations. A 10-pursuit dashboard cap is a UX choice that pre-dates the dashboard tooling that is now standard everywhere else. A free Trello board doesn’t cap at 10 cards.

Slowness in a tool the team uses every day compounds. A 5-second delay on a “preview file” action, hit 30 times in a workday, costs each user 2.5 minutes per day, or roughly 10 hours per year. For a 20-person team, that is about 200 user-hours per year, roughly five working weeks of one person's time, spent waiting on file previews.
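The arithmetic above is easy to check. The inputs are our assumptions, not reviewer data: a 5-second delay, 30 preview actions per workday, 250 workdays per year, and a 20-person team.

```python
# Back-of-envelope check of the waiting-time cost.
# All four inputs are assumptions, not figures from the reviews.
delay_s = 5           # seconds lost per "preview file" action
actions_per_day = 30  # preview actions per user per workday
workdays = 250        # workdays per year
team = 20             # users on the team

minutes_per_day = delay_s * actions_per_day / 60       # per user
hours_per_user_year = minutes_per_day * workdays / 60  # per user
team_hours = hours_per_user_year * team                # whole team

print(f"{minutes_per_day:.1f} min/day, "
      f"{hours_per_user_year:.1f} h/user/year, "
      f"{team_hours:.0f} team-hours/year")
```

Halve any of the inputs and the total halves with it; even at half our assumptions, the team loses over 100 hours a year to one UI delay.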

Upland Qvidian — “could be more modern”

G2 reviews of Upland Qvidian describe the UI as something whose look “could be more modern,” with secondary complaints about AI performance, slow page loads, and price relative to value. New users report trouble fully utilizing the product, which points to a steep onboarding curve.

The pattern: the polite-phrasing tell. “Could be more modern” is what reviewers write when they don’t want to be harsh but the product looks like it was designed in 2014. The accumulated layer cake of UI updates without a re-architecture leaves the experience inconsistent: parts of the product look 2014, parts look 2018, parts look 2024.

The “trouble fully utilizing it” complaint is downstream. A consistent product is learnable. An inconsistent product requires the user to re-learn each section. New users churn before they reach full value, which puts pressure on customer success to spend hours getting them there, which raises the cost-to-serve, which the vendor recovers in the contract price.

What recurs across all four

The same three words show up across all four vendors’ review pages: “slow,” “expensive,” “overpriced.” The fourth word, used most often about Loopio’s “Magic” but appearing across the others, is “wrong” — as in, the AI produced a wrong answer.

We do not interpret this as practitioners being uniformly cynical. The reviewers writing these complaints are domain experts using a tool they paid for and depend on. Their complaints are operational, specific, and recurring. When the same complaints appear across four vendors over multiple years, the issue is not a single bad release; it is the architecture the category has settled into.

The architecture: a content library, a keyword search on top of it, AI features bolted on in the last 18 months without re-architecting the retrieval substrate, a high-touch sales motion to compensate for the product’s inability to onboard cleanly, and a contract price that absorbs the cost-to-serve.

We are not pretending we are immune to earning the same reviews a year from now. If we do, we hope the reviewers say so plainly, and we will reread this post with appropriate humility.

How to read a review page

A small habit, since the post is about reviews: when you read a vendor’s G2 page, sort by recency, not by helpfulness. The “most helpful” sort surfaces the most-upvoted reviews, which are often the oldest. The most-recent sort surfaces what current users are saying about the current version of the product. Loopio’s most-helpful reviews from 2022 are different from its most-recent reviews from 2025, and the difference is informative.

A second habit: read the four-star reviews more carefully than the five-star or one-star reviews. Five-star reviews are sometimes vendor-incentivized; one-star reviews sometimes reflect a specific bad experience that doesn’t generalize. Four-star reviews tend to be from satisfied customers who name a specific gripe — and the specific gripe is usually the most useful signal in the review page.

We will publish this post once a month. If you are shopping the category, the source pages above are worth reading directly. If you are not shopping the category, the patterns are still informative — they are the patterns that defined what we built PursuitAgent to be different from.

Methodology

Reviews sampled on 2025-05-31 from the G2 and Capterra product pages linked above. We are not publishing individual review IDs; this sweep is directional, not a statistical sample. Verbatim quotations in this post appear as they do on the public review pages; phrase-recurrence observations reflect shared vocabulary across multiple reviewers rather than counts drawn from a structured extraction.

Sources

  1. Capterra — Loopio reviews
  2. AutoRFP — Loopio review summary (quoting G2/Capterra)
  3. G2 — Responsive (formerly RFPIO) reviews
  4. G2 — Responsive pros and cons
  5. Capterra — QorusDocs reviews
  6. G2 — Upland Qvidian reviews