Field notes

Reviews watch: what G2 and Capterra said in March

The monthly review aggregation. Two incumbents took notable hits on recent feature-release quality; one sub-category showed consistent positive movement. Links, quotes where they clarify the trend, and no speculation beyond the data.

The PursuitAgent research team · 4 min read · Research

The March aggregation of proposal-software reviews across G2 and Capterra. Two incumbents took notable hits on recent feature-release quality; one sub-category (DDQ-specific tools) showed consistent positive movement. Methodology and its gaps at the end.

What moved

Responsive (G2)

The post-release reviews on Responsive’s late-February update cluster around two themes:

  • UI regressions on content-library search. Multiple reviewers report that search-result quality worsened after the update. The “search is terrible” meme that has recurred in Responsive reviews since at least 2024 reappeared with increased volume. Our read: this looks like an architectural change that didn’t land well, not incremental drift.
  • AI feature output quality. Reviewers flagged that the AI-drafted answers feature is “less useful than basic ad-hoc GenAI” — a phrasing that has appeared recurrently in Responsive reviews and now shows up adjacent to the February update.

No single review is a story. The cluster is the story.

Loopio (Capterra + G2)

Loopio’s March reviews continue the Magic-feature critique pattern we’ve cataloged across previous sweeps. Two specific observations from March:

  • A handful of reviewers explicitly contrast Loopio’s AI output with the category’s grounded-AI-focused entrants, using phrases like “needs my content library to be perfectly maintained” and “would rather write from scratch.” This is consistent with the content-freshness theme that has been dominant in Loopio reviews for 18 months.
  • Several positive reviews anchor on Loopio’s implementation support and customer success — not on product features. The pattern we’ve noted before: the product isn’t what’s being praised; the service around it is.

QorusDocs (Capterra)

Fewer new reviews than the two vendors above, but the ones that landed in March were mixed. The 10-pursuit dashboard cap and the “very slow” performance notes from the prior year continue to surface. One reviewer specifically called out that a recent release improved a specific search flow they’d previously complained about — a rare positive delta.

Upland Qvidian (G2)

Qvidian’s review volume remains low, which is itself signal. The legacy-product reception that has characterized Qvidian for years — “inadequate AI performance, slow, expensive” — is consistent. No notable March movement in either direction.

DDQ-specific tools — positive movement

A subset of the category — tools focused specifically on DDQs and security questionnaires rather than the full RFP surface — saw a consistent positive slope in March reviews. Reviewers in this segment cited:

  • Speed of response turnaround when the tool is actually used by the security team (not the proposal team).
  • Specific integrations with GRC platforms (OneTrust, Whistic).
  • A narrower surface area, which leaves fewer abandoned features to complain about.

We’re not naming specific products here because the positive movement is cross-tool — it looks like a category-level signal, not a vendor-level one. The segment as a whole is moving, which may be worth a proper teardown in a future post.

What we’re watching for April

Three things we’ll check in the April sweep:

  • Whether Responsive patches the search regressions or whether the pattern sticks. A two-month persistence flips a “release blip” into a “product trajectory.”
  • Whether Loopio’s product marketing responds to the grounded-AI comparison thread in their public messaging. Category incumbents have a 12–18 month pattern of re-labeling existing features in response to competitive pressure; an early signal would surface in the April sweep.
  • Whether the DDQ-specific tool positive slope extends, or whether it was driven by a cluster of onboarding reviews that won’t repeat.

Methodology and gaps

What we include. Public reviews on G2 and Capterra in the calendar month, filtered to the product pages above. We read every review posted in the month; we don’t cherry-pick.

What we exclude. Aggregator snippets without traceable source reviews. Reviews that appear to be solicited through incentive programs and don’t label themselves as such (both G2 and Capterra disclose incentivized reviews when the reviewer confirms; those do get included with an asterisk in our internal tracking but not in the prose above).
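The include/exclude rules above amount to a simple filter. A minimal sketch of that logic follows; the record fields, product names, and dates are illustrative assumptions for this example, not our actual pipeline or data.

```python
from datetime import date

# Hypothetical review records; field names are illustrative only.
reviews = [
    {"source": "G2", "product": "Responsive", "posted": date(2026, 3, 4), "incentivized": False},
    {"source": "Capterra", "product": "Loopio", "posted": date(2026, 3, 12), "incentivized": True},
    {"source": "G2", "product": "Responsive", "posted": date(2026, 2, 27), "incentivized": False},
    {"source": "Capterra", "product": "QorusDocs", "posted": date(2026, 3, 20), "incentivized": False},
]

def monthly_sweep(reviews, year, month):
    """Keep reviews posted in the calendar month; split labeled incentivized
    reviews out (tracked internally with an asterisk, excluded from prose)."""
    in_month = [r for r in reviews
                if (r["posted"].year, r["posted"].month) == (year, month)]
    prose = [r for r in in_month if not r["incentivized"]]
    flagged = [r for r in in_month if r["incentivized"]]
    return prose, flagged

prose, flagged = monthly_sweep(reviews, 2026, 3)
```

Here the February review falls outside the March window and the labeled incentivized review lands in the internal-tracking bucket, leaving two reviews for the prose tallies.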

The known gap. We don’t have structured access to TrustRadius reviews or to Gartner Peer Insights — both are paywalled or require an account in good standing. Our view of the category is G2 + Capterra, which misses some enterprise-heavy voices that concentrate on TrustRadius. If we can open a reader account by next month’s sweep, we will; otherwise we’ll continue to note the gap.

What the trends are not. A single month of reviews is not a statistically rigorous sample. The patterns above are directional. A full quarterly report with volumes and sentiment distribution lands in April alongside the Q1 category update.

Sample date. Reviews sampled on 2026-03-28. We are not publishing individual review IDs or a reproducible query.

The takeaway

Two incumbents ran a rough March on visible product metrics. One sub-category is building momentum. The patterns aren’t new, but March’s data kept them moving in the same direction.

Sources

  1. Capterra — Loopio
  2. G2 — Responsive (formerly RFPIO)
  3. G2 — Upland Qvidian
  4. Capterra — QorusDocs
  5. Reviews weekly sweep — September