Field notes.
Page 18 of 31. Browse the archive: notes on RFP workflows, grounded-AI architecture, and proposal operations.
Hallucination monitoring in production
The metrics we watch weekly: per-claim refusal rate, citation-mismatch rate, and the human-graded sample. What we do when each one moves, and the threshold values that trigger an alert.
The RFP section priority matrix
Evaluator weight times effort hours tells you where to spend the draft budget. A simple matrix that shows which sections deserve gold-team review and which deserve a paragraph and a citation. With three worked examples.
Semantic deduplication of KB blocks at ingest
How we merge near-duplicate KB blocks at ingest time using embedding similarity, the threshold we settled on after testing four values, and the trade-off we accept by tuning toward over-merging.
The overpriced document repository trap
An opinion piece on why most RFP tools end up unused. The reviews tell a consistent story across Loopio, Responsive, and Qvidian: teams pay for AI features and end up using a search box. We have a theory about why.
Q3 2025 RFP volume, by sector and state
What SAM.gov and state procurement portals tell us about Q3 2025 RFP volume. The sectors growing, the states moving, the federal categories rebounding from a slow Q2, and the data we cannot reconcile.
In preview: the retrieval-eval dashboard, publicly visible
Our internal retrieval evaluation dashboard is going public in preview. Real gold-set numbers, real regressions, updated nightly. Here is what is on it and what we deliberately left out.
The federal fiscal-year clock just reset
The federal fiscal year started yesterday. Here is what Q1 procurement volume actually looks like, what bids land in the next 90 days, and how a small proposal team should staff for it.
Our retrieval eval, quarterly report
A quarter of running our retrieval evaluation harness against a frozen gold set: the regressions we caught, the two changes that actually moved precision, and the metric we stopped reporting because it lied.
SME collaboration, Part 4 of 4: a KB your SMEs will actually use
What makes an SME contribute to a knowledge base versus what makes them ignore the tool. Closing the four-part series — the structural choices that decide whether your KB compounds or rots.
Security questionnaires: linking answers to evidence
How a SOC 2 attestation PDF becomes a citation source for DDQ answers. The ingest pipeline, the per-control extraction, and the per-claim linking that makes 'yes' answers verifiable instead of theatrical.
Our own proposal process, in public
How PursuitAgent responds to its own inbound RFPs. The intake, the bid/no-bid, the writing, the gold team, and the parts of the product that don't help us yet because we haven't built them.
Vendor onboarding DDQs across four industries
Finance, healthcare, SaaS, and defense. The same 200 questions in four different rephrasings. A teardown of how the category-specific framing changes what the buyer expects to see in the answer — and what stays the same underneath.
Prefer to see the product?
Take the 5-minute tour, or start a trial workspace and see PursuitAgent draft answers with citations.