Field notes

The roadmap bet we rejected

One feature customers asked for through most of year one that we declined to build, and the reasoning. A short founder note on saying no to the thing that would have been popular and wrong.

Bo Bergstrom · 4 min read

Through most of year one, the single most frequently requested feature in customer calls and sales-qualified discovery was a mode that would let the drafting agent produce a full-length response without any supporting KB content — a “write from the model” mode that would fill the page and let the proposal writer edit backwards from a finished artifact. We said no every time, and we are going to keep saying no. This post is why.

The request is sympathetic. Proposal writers are on deadlines. A blank page is harder than a drafted page. If the model can produce a plausible first pass that the writer can then sharpen, the cold-start cost goes down and the cycle time goes down with it. Most of the competitive category offers exactly that feature and markets it as the AI value proposition. The request arrives every week in some version of “can your tool do what Tool-X does when the KB is thin.”

The reason we reject it

The Grounded-AI Pledge is the reason. The pledge says every drafted paragraph cites the KB chunk it was drawn from, and ungrounded questions produce an explicit refusal rather than a confident guess. A mode that drafts full-length content from the model’s training data breaks the pledge by construction. There is no clever UI wrapper that makes it not break the pledge. You either have the guarantee or you do not.
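
PursuitAgent has not published its implementation, but the guarantee described above is simple enough to sketch. The following is a minimal, hypothetical Python illustration of the pledge's invariant — the `Paragraph`, `DraftResult`, and `enforce_pledge` names are all invented here, not the product's API. The point is that the check is binary: either every paragraph carries a citation, or the whole draft is refused.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Paragraph:
    text: str
    kb_chunk_id: Optional[str]  # None means the claim has no KB source

@dataclass
class DraftResult:
    paragraphs: List[Paragraph]
    refused: bool
    refusal_reason: Optional[str]

def enforce_pledge(paragraphs: List[Paragraph]) -> DraftResult:
    """Every drafted paragraph must cite a KB chunk; otherwise refuse outright.

    There is no partial credit: one ungrounded paragraph refuses the draft,
    which is what 'you either have the guarantee or you do not' means.
    """
    ungrounded = [p for p in paragraphs if p.kb_chunk_id is None]
    if ungrounded:
        return DraftResult(
            paragraphs=[],
            refused=True,
            refusal_reason=(
                f"{len(ungrounded)} paragraph(s) have no KB source; "
                "refusing rather than guessing."
            ),
        )
    return DraftResult(paragraphs=paragraphs, refused=False, refusal_reason=None)
```

A “write from the model” mode would have to construct paragraphs with `kb_chunk_id=None` and then skip this check — which is why no UI wrapper can offer that mode without breaking the invariant.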

The second-order problem is worse than the first-order problem. Once a proposal writer has a plausible-looking draft on the page, they will ship it. The edit-backwards workflow reads well in theory and collapses in practice. I have seen it collapse in three customer environments in the last year — not PursuitAgent customers, but teams describing how they use tools that offer this feature. The common failure is that a paragraph written from model training, polished lightly, and submitted to a regulated buyer fails the traceability test when the buyer’s evaluator asks where the claim came from. The answer is: nowhere. The model made it up. The proposal team does not find out until after submission.

A third reason, which is the one that actually settles the argument internally: the feature is a trap for the category. Once a vendor ships a “draft from the model” mode, the roadmap gravity pulls toward making that mode better — better prompts, better post-hoc rewriting, better hallucination filters. Every unit of engineering attention that goes into that mode is attention not going into making grounded retrieval better. We know what category we want to build in. It is not that one.

What we ship instead

The thing we ship instead is a much more boring answer. When the KB is thin, we make the gaps legible. The drafting pipeline refuses the question and tells the user exactly what KB material would let it answer, and surfaces a KB-build flow that lets them add the material in the same session. The user loops back to the draft with the gap filled. The cycle time goes up modestly for the first draft and goes down significantly on the second and third drafts for adjacent questions, because the KB is now better than it was.
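
The flow above can be sketched in a few lines. This is an illustrative toy, not the product's pipeline: `thin_kb_response` is a made-up name, and the keyword-overlap relevance test stands in for whatever retrieval scoring a real system would use. What matters is the shape of the refusal — it names the missing material and points at the KB-build flow instead of drafting from nothing.

```python
from typing import List

def thin_kb_response(question: str, kb_chunks: List[str]) -> dict:
    """Answer from the KB, or refuse and make the gap legible."""

    def relevant(chunk: str) -> bool:
        # Toy relevance check (shared keywords); a real pipeline
        # would use embedding similarity or a reranker here.
        return len(set(chunk.lower().split()) & set(question.lower().split())) >= 2

    hits = [c for c in kb_chunks if relevant(c)]
    if not hits:
        return {
            "draft": None,
            "refused": True,
            # Tell the user exactly what KB material would let us answer...
            "needed_material": f"KB content answering: {question!r}",
            # ...and route them to fill the gap in the same session.
            "next_step": "open KB-build flow",
        }
    return {
        "draft": " ".join(hits),
        "refused": False,
        "needed_material": None,
        "next_step": None,
    }
```

The asymmetry in the paragraph above falls out of this structure: the first draft pays the cost of the KB-build detour, and every adjacent question afterward hits a better KB.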

That is the less-satisfying answer. It is also the honest one.

What this cost us

It cost us some deals. I am confident it cost us at least three sales cycles in year one where the buyer wanted the write-from-model capability and chose a competitor that offered it. I know the names of those deals. I am not going to publish them, but I have written them down and I re-read the list once a quarter.

The reason I re-read the list is that saying no to a popular feature in public commits you to continuing to say no later. The decision is easy when the tradeoff is clean. It gets harder when a specific six-figure deal is on the line and the buyer is asking for the feature as a requirement. In those conversations the thing that holds the line is knowing that every prior rejection is on the record. You cannot back into the feature later without explaining why the last twelve rejections were wrong.

The honest caveat

There is a version of this feature that is not a trap. A mode that drafts from the model but labels every paragraph as ungrounded and refuses to let the user submit the proposal without first grounding each paragraph against a KB source — that is a tool, not a risk. We have not built it because the sequencing is wrong: grounding after the fact is a harder UX problem than grounding during drafting, and we want the grounded path to be the good path. If we ever ship the ungrounded-with-mandatory-grounding mode, it will be because the grounded path is mature enough that the ungrounded mode is an acceleration rather than an escape hatch.
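
The non-trap version of the feature reduces to a submission gate. A minimal sketch of that gate, under the assumption (mine, not the post's) that paragraphs carry an optional `kb_source` field once grounded:

```python
from typing import List, Tuple

def can_submit(paragraphs: List[dict]) -> Tuple[bool, List[int]]:
    """Submission gate for a hypothetical ungrounded-drafting mode.

    Drafting from the model is allowed, but every paragraph must be
    grounded against a KB source before the proposal can go out.
    Returns (ok, indices of still-ungrounded paragraphs).
    """
    ungrounded = [i for i, p in enumerate(paragraphs) if p.get("kb_source") is None]
    return (len(ungrounded) == 0, ungrounded)
```

The hard part, as the paragraph says, is not this check — it is the UX of walking a writer through grounding each flagged paragraph after the fact, which is why the sequencing argument favors building the grounded path first.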

We are a long way from that. In the meantime, the answer is no, and the reason is in the pledge.

Sources

  1. PursuitAgent — The Grounded-AI Pledge in code
  2. PursuitAgent — Why we're writing this blog