The moat question, revisited
Opinion. A year in, where the durable advantage lives for a grounded-AI proposal product — and where it doesn't. Three candidate moats, two I believe in, one I don't.
People keep asking what the moat is. It is a fair question. A year in, with the AI-first cohort now shipping credible products and the incumbents pouring budget into catching up, the question is sharper than it was on day one. Here is my honest read.
This is an opinion post. I run a company in this category and I have obvious bias. I will try to be fair about what I think is durable and what I think isn’t.
Moat one, which I believe in: the corpus gets better as the customer uses the product
A grounded-AI proposal tool is, structurally, a system that gets more valuable per customer over time as the customer’s corpus gets richer and cleaner. Every bid that ships through our product produces three artifacts — the compliance matrix, the win-theme inventory, the post-mortem themes — that feed back into the knowledge base. The next bid starts from a richer base than the last one did.
This is the same observation that makes Loopio’s long-term value real when the content library is maintained. The difference, we think, is that our pipeline maintains the library automatically — content-block versioning flags stale answers, post-mortem themes promote and retire claims based on what evaluators actually rewarded, per-claim verification catches when a claim starts contradicting the current source.
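To make the versioning-and-staleness pattern concrete, here is a minimal sketch of what the feedback loop could look like. All names here (ContentBlock, is_stale, review_queue, the 180-day window) are invented for illustration, not our product's actual schema or API; the point is only that "keep the corpus credible" reduces to a small amount of bookkeeping applied relentlessly.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical content-library bookkeeping: each reusable claim carries its
# source, when it was last verified against that source, and its post-mortem
# record (how often evaluators rewarded or punished bids containing it).

@dataclass
class ContentBlock:
    claim: str
    source_doc: str
    last_verified: date
    wins: int = 0    # bids won that shipped this claim (from post-mortems)
    losses: int = 0  # bids lost that shipped this claim

STALENESS_WINDOW = timedelta(days=180)  # illustrative threshold, not a product default

def is_stale(block: ContentBlock, today: date) -> bool:
    """Flag a claim that has not been re-verified against its source recently."""
    return today - block.last_verified > STALENESS_WINDOW

def review_queue(blocks: list[ContentBlock], today: date) -> list[ContentBlock]:
    """Claims needing human review: stale ones, plus net-negative performers.

    Oldest verification date first, so the riskiest claims surface at the top.
    """
    flagged = [b for b in blocks if is_stale(b, today) or b.losses > b.wins]
    return sorted(flagged, key=lambda b: b.last_verified)
```

The scheme is deliberately boring: a date, two counters, and a sort. The moat claim is not that this logic is hard to write, but that it only pays off when it runs on every block of every bid for years.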
The moat here is not that we have better retrieval than a competitor — a reasonable competitor can match us on retrieval within a year. The moat is that we have spent a year shipping the discipline that keeps the corpus credible as it grows, and the next year’s bids from the same customer depend on that discipline holding.
If I am wrong about this moat, the specific way I am wrong is that the discipline is commoditizable. Any vendor that wants to can copy the versioning pattern, the post-mortem auto-theming, the staleness alerts. The patterns are in public posts on this blog. The question is how many of the 40 vendors in the category actually will.
Moat two, which I believe in with reservations: customers whose reviewers audit every claim
The customers who benefit most from grounded AI are the customers whose internal reviewers audit every claim before it ships — regulated industries, federal contractors, healthcare, defense. Those customers have a compliance burden the non-regulated cohort does not. A product that makes their audit 10x faster is a product they cannot replace without taking a step backward.
The lock-in is not from data portability or switching costs in the traditional sense. The lock-in is from workflow: once a customer’s compliance team has standardized on reviewing citations in a specific format, with a specific audit trail, with specific escalation paths, switching the underlying tool means retraining the whole review function. That is a real cost.
My reservation: this moat only works in the regulated cohort. For the commercial cohort — SaaS bids, mid-market services contracts — the compliance burden is lighter and the switching cost is lower. A commercial customer will try whichever tool they think is fastest and switch when a faster one shows up. The moat in regulated land is strong; the moat in commercial land is weak.
The practical implication is that the regulated-customer base is worth more per logo than the commercial base, and the go-to-market should reflect that. It largely does, though we still write too much copy aimed at the commercial cohort where the arguments don’t land as hard.
Moat three, which I don’t believe in: the UI
For a while I thought the UI could be a durable advantage. A fast, uncluttered editor with good keyboard shortcuts and smart review workflows — that could be a reason customers stay.
I no longer think this is durable. Not because the UI doesn’t matter — it does, and we will keep investing in it — but because a UI advantage is copyable in a year. Any team with a React frontend and a sane design sensibility can ship a clean editor. The incumbents have legitimate UI debt we can exploit while they pay it down; that is not the same as a permanent advantage.
The clearer way to see this: the UI complaints in today’s incumbent reviews will, 18 months from now, be complaints about a previous version, because the incumbents can read those reviews too and are rebuilding. A UI moat that evaporates when the competitor ships a redesign is not a moat. It is a head start.
What the moat is made of, bottom line
The moat is compounded discipline around the corpus — freshness, versioning, post-mortem feedback, per-claim verification — wrapped in a workflow that the regulated customer cohort treats as load-bearing. It is not the model, not the retrieval, not the UI individually. Each of those is commoditizable on its own. The combination, practiced consistently over years on a specific customer base, is what gets hard to replicate.
1up’s writeup on the category argues that most RFP tools are “mostly just knowledge management.” That is true of most of the category. The question is whether the version of knowledge management that keeps the knowledge honest is a different product, or a better execution of the same product. I think it is a different product, and the moat comes from building the different product with enough discipline over enough years that the category recognizes it.
If I am wrong about all three moats, the version of wrong that would bite the hardest is this: in a category where models improve every six months and incumbents have unlimited capital to retrofit, no vendor has a durable moat against the model roadmap. The moat is only how fast you move before the category consolidates. I am not sure that reading is wrong. What I am sure of is that moving fast on the two real moats beats moving slow on the one that isn’t.
Ask me again at year two. I will have a clearer answer or a different question.