Field notes

Reviews watch: what G2 and Capterra said in September

Monthly aggregation of public review activity across the major proposal-management vendors. Four competitor deltas worth noting from September 2025.

The PursuitAgent research team · 4 min read · Research

This is the recurring monthly sweep of public review activity across the four most-reviewed proposal-management vendors. We pull recurring themes from the most recent G2 and Capterra reviews and note where the public sentiment has shifted from the prior month. We do not aggregate ratings — we report on what reviewers are writing about.

Four deltas worth noting from September 2025.

1. Loopio — “the magic is still off”

The recurring complaint about Loopio’s “Magic” auto-suggest feature continues. The pattern across Capterra reviews and the aggregated commentary on AutoRFP’s review summary lands on the same point: the AI suggestions work for basic, repeated questions and break on nuanced content. Reviewers describe re-editing most suggestions to the point that the suggestion isn’t saving meaningful time.

The structural reason — the one we have written about in Content libraries vs. knowledge bases and in the Loopio teardown — is that the content library degrades faster than the AI layer can adapt. When the library falls behind, the magic gets worse, not better. Reviewers paying $1,700 per seat per month for a tool that surfaces stale answers describe it as “an overpriced document repository.”

No new feature announcements from Loopio this month that change the picture. The complaint pattern is stable.

2. Responsive — UX complaints intensifying

G2 reviews of Responsive continue to describe the search as the primary friction point — “constantly misidentifies what I’m searching for and shows completely unrelated results.” The newer pattern in September: reviewers comparing Responsive’s AI suggestions unfavorably to using ChatGPT directly, calling the in-product AI experience “less intuitive and buggy” than ad-hoc generative AI.

The structural read is that Responsive’s AI feature is grafted onto a keyword-search retrieval layer that wasn’t designed for semantic queries. The mismatch produces the surface-level complaint about search. We covered the underlying mechanism in Hybrid search: dense plus sparse — the difference between keyword-match retrieval and semantic retrieval is large in this category and is visible in the review text.
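The gap between keyword-match and semantic retrieval can be sketched in a few lines. This is a toy illustration only: the synonym map below is a hypothetical stand-in for what an embedding model provides, and none of the names or data come from Responsive’s actual implementation.

```python
# Toy contrast between sparse (keyword-match) and a stand-in
# "dense" (semantic) retrieval score. Real systems use BM25 and
# learned embeddings; the SYNONYMS map is a hypothetical proxy
# for the semantic neighborhoods an embedding model captures.

SYNONYMS = {  # hypothetical semantic neighborhoods
    "pricing": {"cost", "price", "fees"},
    "slow": {"latency", "lag", "sluggish"},
}

def tokens(text):
    return set(text.lower().split())

def sparse_score(query, doc):
    # Keyword-match retrieval: exact token overlap only.
    return len(tokens(query) & tokens(doc))

def dense_score(query, doc):
    # Semantic stand-in: also credit overlap via the synonym map.
    q, d = tokens(query), tokens(doc)
    hits = 0
    for t in q:
        neighborhood = {t} | SYNONYMS.get(t, set())
        if neighborhood & d:
            hits += 1
    return hits

query = "pricing complaints"
review = "our fees increased after renewal"  # relevant, but no exact keyword overlap

keyword_only = sparse_score(query, review)   # 0: keyword search misses it entirely
with_semantic = dense_score(query, review)   # 1: the synonym layer recovers it
```

The failure mode in the reviews matches the first score: a retrieval layer built on exact token overlap returns nothing (or unrelated results) for queries phrased differently from the stored content, which is exactly where an AI suggestion layer grafted on top falls down.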

3. Qorus — pace complaint continues

Capterra reviews of Qorus continue to flag the product as “very slow” — long waits to preview files, view the cart, use the dashboard. The hard-cap dashboard limit (10 pursuits visible at once) is mentioned in two new September reviews as a workflow constraint that prevents teams from seeing the full pipeline. Content-search results are described as surfacing less-relevant matches, with reviewers manually triaging.

No release notes from Qorus this month that address the latency or the cap. The complaints are stable.

4. Upland Qvidian — modernization slow

G2 reviews of Qvidian describe the UI as needing modernization, AI performance as inadequate, and the price as high relative to the experience. New users describe difficulty making full use of the feature set — a sign of accumulated UX debt. We have written about this category-wide pattern in The legacy RFP UI is the moat.

September brought no major release notes from Qvidian. The complaints are consistent with prior months.

Cross-vendor pattern: stale content libraries are the single most-cited complaint

Across all four vendors, the most frequently surfaced complaint in September reviews is a variation on the same theme: the content library degrades faster than the team can maintain it, and the AI features grafted on top of the library degrade with it. This is the structural complaint we have been writing about all year — it shows up in Content libraries vs. knowledge bases and in the freshness pattern we shipped in March.

The September reading worth flagging: across the G2 and Capterra feeds we read this month for the four vendors, complaints about staleness, accuracy, and maintenance burden were together the most visible category. We are not publishing a precise prevalence number — the sample is not drawn from an auditable extraction — but the pattern is consistent with prior monthly sweeps and continues to dominate the complaint surface.

What we are watching for next month

Three signals worth tracking through October.

  • Whether any of the four vendors ships an explicit grounding-and-citation feature in their AI surface. The category is overdue for one. None has shipped it yet.
  • Whether the recurring “outdated content library” complaint surfaces in any vendor’s roadmap notes or webinar content. The complaint is the most consistent across all four products and the public roadmap silence is itself a signal.
  • Whether new entrants — autonomous-RFP startups raising in 2024 and 2025 — start showing up in G2 review activity. Most are still under the threshold for review volume.

We will publish the next sweep at the end of October. As always, this post is a synthesis of public review text, not original research; if you have private deployment data points worth comparing against the public picture, the email is on the research team’s page.

Methodology

Reviews sampled on 2025-09-20 from the G2 and Capterra product pages linked above. We are not publishing individual review IDs; the sweep is directional, not a statistical sample. The phrase-recurrence observations above reflect shared vocabulary we see across multiple reviewers, not counts drawn from a structured extraction.
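The kind of phrase-recurrence tally the methodology describes can be sketched as a simple vocabulary match. Everything below is a hypothetical illustration — the theme vocabularies and review texts are made up, not drawn from the actual September sample.

```python
# Illustrative sketch of a phrase-recurrence tally: count how many
# reviews mention at least one term from each complaint-theme
# vocabulary. Themes and review texts are hypothetical examples.

THEMES = {
    "stale_library": {"stale", "outdated", "out-of-date"},
    "slow": {"slow", "lag", "latency"},
}

def reviews_mentioning(theme_terms, reviews):
    """Number of reviews containing at least one theme term."""
    count = 0
    for text in reviews:
        words = set(text.lower().split())
        if words & theme_terms:
            count += 1
    return count

reviews = [
    "the content library is outdated and slow to search",
    "suggestions come from stale answers",
    "great support team",
]

tally = {name: reviews_mentioning(terms, reviews)
         for name, terms in THEMES.items()}
```

A tally like this gives directional counts of shared vocabulary, which is consistent with the caveat above: it surfaces recurring language across reviewers without claiming a statistically representative extraction.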

Sources

  1. Capterra — Loopio reviews
  2. G2 — Responsive (formerly RFPIO)
  3. Capterra — Qorus for Proposal Management
  4. G2 — Upland Qvidian
  5. AutoRFP — Loopio reviews summary