Field notes

Feature parity is the wrong competitive goal

Chasing Loopio's feature list would kill us. Here's why we picked a different target — and the product we're building because of it.

Bo Bergstrom · 5 min read

A founder I respect told me last month that we should ship “Loopio parity” by Q1 2026. He meant well. He’s wrong.

Feature parity is the wrong goal because the feature list is the wrong artifact. Loopio’s feature list is the residue of fifteen years of enterprise sales calls, RFP committees, and “we’d buy if you also did X” conversations. Most of it doesn’t work very well — that’s the persistent message in their G2 and Capterra reviews. The “Magic” AI feature, in particular, is a recurring complaint: “the answers are usually wrong” is the dominant theme. Their content libraries rot. Their search is keyword-matching in a semantic world. Building all of that, faster, doesn’t get us anywhere a customer wants to be.

This post is about what we’re building instead, and why.

Parity is a deferred bet

When a startup sets parity as a goal, what they’re really saying is: “we’ll let the incumbent define the problem space, and we’ll race them on execution within that space.”

That’s a defensible bet in some categories. It’s not defensible here, because the incumbents’ problem framing is broken. Loopio, Responsive, Qvidian, Qorus — all four were built when the proposal team’s job was to manage a content library and stitch answers together by hand. Their products are CRMs for paragraphs. The AI layer was bolted on after 2023. The seams show.

1up wrote a piece earlier this year that put it cleanly: “Most RFP tools are mostly just knowledge management.” They’re right. We don’t want to be a better knowledge-management tool. The proposal industry already has good ones, and the customers complaining about Loopio aren’t asking for Loopio with a faster search bar.

What customers actually complain about

I've read every Loopio and Responsive review on G2 and Capterra in a single sitting, twice this year. The complaints cluster into four buckets:

  1. The AI is wrong. “Magic” suggests answers that aren’t supported by the content library. Reviewers spend more time fixing the suggestion than they would have spent writing from scratch.
  2. The library rots. The content is six months out of date by the time it’s surfaced. Nobody owns refreshing it. Nobody knows which answers are stale.
  3. The search is terrible. Keyword-match in a semantic world, exactly as Responsive’s G2 reviewers describe. The right answer is in the library; the search can’t find it.
  4. The UX is heavy. Trained users tolerate it. New users bounce. The product has accumulated fifteen years of features and the surface area shows.

None of those problems gets fixed by parity. The third one — search — gets worse the more features we add to the surface. The first two are not feature problems. They are the central thing.

What we’re building instead

Three bets that are not parity bets.

Grounded retrieval over the customer’s actual KB. Every drafted sentence cites the block it came from. The drafting engine refuses to answer when retrieval scores below threshold. We measure precision@5 and claim-coverage on a labeled gold set (the eval pillar piece has the methodology). This is not a feature in the Loopio sense. It is the load-bearing layer everything else sits on. Get it wrong and everything we ship on top is decoration.
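A minimal sketch of what "refuses to answer below threshold" means in practice. The names here (`Block`, `draft_answer`, `SCORE_THRESHOLD`) are illustrative, not our actual retrieval stack, and the threshold value is a placeholder:

```python
from dataclasses import dataclass

SCORE_THRESHOLD = 0.72  # hypothetical cutoff, tuned against the labeled gold set


@dataclass
class Block:
    block_id: str
    text: str
    score: float  # retrieval similarity score in [0, 1]


def draft_answer(question: str, retrieved: list[Block]) -> dict:
    """Refuse to draft when no retrieved block clears the confidence threshold."""
    supported = [b for b in retrieved if b.score >= SCORE_THRESHOLD]
    if not supported:
        # Better no answer than an ungrounded one.
        return {"answer": None, "citations": [], "refused": True}
    # Simplification for illustration: real drafting composes prose, but
    # every sentence still carries the id of the block it came from.
    return {
        "answer": " ".join(b.text for b in supported),
        "citations": [b.block_id for b in supported],
        "refused": False,
    }


def precision_at_5(retrieved_ids: list[str], relevant_ids: set[str]) -> float:
    """Precision@5: fraction of the top 5 retrieved blocks that are relevant."""
    return sum(1 for i in retrieved_ids[:5] if i in relevant_ids) / 5.0
```

The design choice worth noting is the refusal branch: the engine's failure mode is silence, not a plausible-sounding wrong answer, which is exactly the failure mode Loopio's reviewers complain about.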

Content freshness as a product feature. Every block has a last-used date, an approver, an expiry. Stale blocks surface in dashboards. The KB rots, and the product tells you it’s rotting before the customer’s evaluator does. The shipped freshness alerts post covers the mechanics.
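The staleness check itself is simple once every block carries that metadata. A sketch, with hypothetical field names and an assumed 180-day "unused" window:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class KBBlock:
    block_id: str
    approver: str     # who owns refreshing this answer
    last_used: date   # last time a bid pulled this block
    expires: date     # hard expiry set at approval time


def stale_blocks(
    blocks: list[KBBlock],
    today: date,
    unused_after: timedelta = timedelta(days=180),  # assumed default window
) -> list[KBBlock]:
    """Flag blocks that are expired, or that no bid has used recently."""
    return [
        b for b in blocks
        if b.expires <= today or (today - b.last_used) > unused_after
    ]
```

Everything the dashboard shows is a query over this: expired blocks go to the approver, long-unused blocks go to a review queue.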

Win-loss intelligence written back to the KB. Every closed bid runs the post-mortem questions a color-team would run, and the answers go back into the corpus the next bid draws from. The eight-stage pipeline post describes why this is the compounding loop.
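In data terms the loop is small: a fixed question set, answers collected per closed bid, appended to the same corpus retrieval draws from. A sketch with hypothetical questions and record shape:

```python
# Hypothetical color-team question set; the real one is bid-type specific.
POST_MORTEM_QUESTIONS = [
    "Which answers did the evaluator score lowest, and why?",
    "Which library blocks did we override during drafting?",
    "What did the winning competitor offer that we could not?",
]


def write_back(
    bid_id: str,
    outcome: str,                 # "won" or "lost"
    answers: dict[str, str],      # question -> finding from the post-mortem
    corpus: list[dict],           # the same corpus the next bid retrieves from
) -> None:
    """Append post-mortem findings as retrievable corpus records."""
    for question, finding in answers.items():
        corpus.append({
            "bid_id": bid_id,
            "outcome": outcome,
            "question": question,
            "finding": finding,
        })
```

The compounding comes from the last parameter: findings land in the retrieval corpus itself, not in a report nobody reads.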

These three are not features Loopio doesn’t have. Loopio has versions of all three. The difference is that they are the primary product surface for us, not a tab in the secondary navigation. A customer’s first hour with PursuitAgent is spent watching grounded retrieval work on their own questions. A customer’s first hour with Loopio is spent loading content into a library.

Where parity actually matters

I want to be honest about where I think the parity argument has merit.

Integrations. If you can’t connect to Salesforce, Google Drive, SharePoint, and Slack, an enterprise buyer will not even take the call. We are at parity here, not because integrations are interesting, but because not having them is disqualifying.

Permissions and audit trails. Enterprise customers need per-block permissions, role-based access control, and an audit log they can show a compliance auditor. We are at parity here for the same reason.

Export fidelity. A proposal that renders correctly in Word, PDF, and the buyer’s portal is non-negotiable. We are at parity here because customers cannot tolerate “the formatting was a little off” in a $5M bid.

The pattern: parity matters where parity is the floor for being considered. Above the floor, parity is a tax on engineering attention that should be spent on the bets that distinguish us.

Where this goes wrong

The risk in this argument is that we build a product that is excellent at three things and inadequate at twenty. Buyers don’t grade on excellence; they grade on complete coverage. A buyer who needs the twenty-first thing — say, multilingual proposal generation, or a specific content-library import flow from SharePoint Online — will not buy from us, and the size of the segment that needs the twenty-first thing is non-trivial.

I’m willing to take that risk because the alternative is Loopio with a smaller engineering team, and that’s a worse business and a worse product. But I want to be clear-eyed about what we’re giving up.

The takeaway

Parity is what you set as a goal when you can’t think of a better one. The better one, here, is to fix the things customers are loudest about — wrong AI, rotting libraries, bad search — and let the rest accumulate at the speed customers ask for it.

The day a Loopio reviewer writes “I switched because the answers were actually right” is the day the parity argument loses, permanently. We’re building toward that day.

Sources

  1. Capterra — Loopio reviews
  2. G2 — Responsive (formerly RFPIO)
  3. 1up — The problem with RFP software
  4. PursuitAgent — The Loopio teardown