Blog · Page 22
Field notes.
Page 22 of 31. Browse the archive of RFP workflows, grounded-AI architecture, and proposal operations notes.
A past-performance story in three sentences
The compressed form that reads well, scores well, and survives the page-budget cut. One example, annotated.
Confidence scores for grounded drafts, explained
What '82% confident' means in our drafting engine, how it's computed from retrieval and entailment signals, and where it leads the reviewer.
Discriminators: the word your evaluator was trained on
APMP calls them discriminators. Most teams don't write them. Three real examples from awarded proposals — what they did, why they worked.
Streaming drafts over SSE, with citations inline
How we stream draft output to the browser while keeping citation integrity intact. The architecture, the failure modes, and the part we got wrong twice.
Feature parity is the wrong competitive goal
Chasing Loopio's feature list would kill us. Here's why we picked a different target — and the product we're building because of it.
Qvidian reviews, five years in retrospect
Sentiment trajectory across 200+ public reviews of Upland Qvidian. Where reviewer language stayed consistent, where it shifted, and where the product stopped tracking the market.
In preview: question router v2 with confidence scores
DDQ questions now route with a confidence score in preview. High-confidence routes auto-draft from the KB; low-confidence routes to human review with a typed reason for the routing call.
The Friday-afternoon submit is a code smell
When proposal teams routinely submit at 4pm on Fridays, the pattern points to capacity and capture-hygiene problems upstream. What the smell tells you and what to fix.
The complete bid/no-bid scoring framework
The canonical bid/no-bid framework. Five variables scored 1–5, weighting, the rubric template, the bid-decision meeting, override discipline, and where the rubric is honestly wrong.
Bid/no-bid is a decision, not a vibe
A preview of Thursday's pillar piece. Why most teams score implicitly, what implicit scoring costs, and the meeting structure that turns a vibe into a decision.
How we curate the retrieval gold set
120 questions, three annotators, a disagreement-resolution protocol. The recipe behind the held-out set we evaluate every retrieval pipeline change against — and the parts we plan to open-source.
Mandatory vs. desirable requirements, in plain English
The distinction that costs bidders contracts. Four examples of how mandatory and desirable requirements look in real RFP language and how to score them differently.
Prefer to see the product?
Take the 5-minute tour, or start a trial workspace and see PursuitAgent draft answers with citations.