Field notes

The Friday DDQ batch we process in under an hour

What automation does to a weekly batch of security questionnaires, and the four things it still can't do.

PursuitAgent 3 min read Procurement

Every Friday morning a queue of security questionnaires lands in the proposal team’s inbox: SIG Lite excerpts, CAIQ refreshes, custom enterprise security questionnaires from buyers running a procurement window. Last quarter the batch averaged four documents and 720 questions. The team gets through it before lunch. Here is what the automation does and what it doesn’t.

What automation handles

Question parsing and deduplication. Across four questionnaires there are typically 200–300 unique questions. The other 400+ are restatements of the same five things (“describe your incident-response process,” “list your subprocessors,” “describe encryption at rest”). The ingest pipeline (the multi-doc ingest post covers the mechanics) hashes question stems and groups them. A single canonical answer covers the deduplicated set.
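
A minimal sketch of the stem-hash-and-group step. The function names and normalization rules are illustrative, not the shipped pipeline, which presumably matches more loosely than an exact hash.

```python
import hashlib
import re
from collections import defaultdict

def question_stem(text: str) -> str:
    """Normalize a question to its stem: lowercase, strip punctuation,
    drop polite lead-ins so "Please describe..." and "Describe..." collide."""
    text = re.sub(r"^(please|kindly)\s+", "", text.lower())
    text = re.sub(r"[^a-z0-9\s]", "", text)
    return " ".join(text.split())

def dedupe(questions: list[str]) -> dict[str, list[str]]:
    """Group questions whose normalized stems hash identically;
    each group gets one canonical answer downstream."""
    groups: dict[str, list[str]] = defaultdict(list)
    for q in questions:
        key = hashlib.sha256(question_stem(q).encode()).hexdigest()[:16]
        groups[key].append(q)
    return groups
```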

Routing. Each question gets classified by domain — security, privacy, availability, finance, legal — and routed to the SME team that owns the answer. The classifier is the DDQ classification feature shipped in May. Confidence below 0.7 routes to a human triage queue; about 8% of questions land there each week.
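
Sketched as code, the routing rule is a thin wrapper around the classifier. `classify` here is a stand-in for the shipped DDQ classifier, not its real interface, and the queue names are illustrative.

```python
from typing import Callable

TRIAGE_THRESHOLD = 0.7   # below this confidence, a human triages first
DOMAINS = ("security", "privacy", "availability", "finance", "legal")

def route(question: str,
          classify: Callable[[str], tuple[str, float]]) -> str:
    """Return the queue that owns a question. `classify` stands in for the
    shipped classifier and returns a (domain, confidence) pair."""
    domain, confidence = classify(question)
    if domain not in DOMAINS or confidence < TRIAGE_THRESHOLD:
        return "triage"   # roughly 8% of questions land here each week
    return domain
```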

Drafting from the canonical answer set. Every previously-answered question has a canonical KB block. Retrieval finds the block, the drafting engine renders the answer in the form the new questionnaire expects (Yes/No/Partial, free-text, controls reference), citations attach, and the reviewer sees a draft.
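
In sketch form, the rendering step looks something like the function below. The format names and the fields on the KB block (`verdict`, `long_form`, `control_ids`, `citations`) are assumptions about the schema, not the real one.

```python
def render_answer(block: dict, answer_format: str) -> dict:
    """Render one canonical KB block in the shape the questionnaire expects.
    Field names on `block` are illustrative, not the actual schema."""
    if answer_format == "yes_no_partial":
        body = block["verdict"]                 # "Yes" / "No" / "Partial"
    elif answer_format == "controls_reference":
        body = ", ".join(block["control_ids"])  # e.g. referenced control IDs
    else:                                       # free-text
        body = block["long_form"]
    return {"answer": body, "citations": block["citations"]}
```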

Diffing against the last response. When this is a quarterly refresh of a questionnaire we already answered, the engine surfaces the diff: questions that changed, answers that have a fresher source block, answers that are now stale. The reviewer reads the diff, not the whole response.
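
A sketch of the refresh diff, keyed by question ID. The field names and the staleness rule are illustrative; the point is that the reviewer only sees the three buckets, not the full response.

```python
def diff_refresh(previous: dict[str, dict],
                 new_questions: dict[str, str],
                 kb: dict[str, dict]) -> dict[str, list[str]]:
    """Diff a quarterly refresh against last quarter's response.
    `previous` maps question IDs to what we answered (text + timestamp),
    `new_questions` is this quarter's wording, `kb` holds the canonical
    blocks. Field names are illustrative."""
    report = {"changed": [], "fresher_source": [], "stale": []}
    for qid, prev in previous.items():
        if new_questions.get(qid, prev["question_text"]) != prev["question_text"]:
            report["changed"].append(qid)          # buyer reworded the question
        block = kb.get(qid)
        if block is None:
            report["stale"].append(qid)            # canonical block retired
        elif block["updated_at"] > prev["answered_at"]:
            report["fresher_source"].append(qid)   # source moved since we answered
    return report
```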

That gets the four-doc batch from raw inbound to reviewer-ready in about 25 minutes.

What automation doesn’t handle

The “describe your approach” essay questions. When a buyer’s questionnaire asks for a 200-word free-text response on “your approach to insider threat,” there is no canonical block that maps cleanly. The retrieval grabs adjacent material; the engine drafts; the reviewer rewrites about half of it. Time saved over starting from scratch: roughly 40%. Not 90%. We are honest about that — the Arphie write-up on questionnaire automation makes the same observation across the category.

Questions that require the reviewer to know something the KB doesn’t. “Has your platform been deployed in a HIPAA environment with more than 50,000 records?” If the deployment hasn’t been catalogued in the KB, the engine refuses (correctly), and the reviewer answers from memory or asks the customer-success team. Refusal is the right behavior; it is also a reviewer-time cost the automation didn’t remove.

Cross-questionnaire reconciliation. Buyer A’s questionnaire and buyer B’s questionnaire both ask about subprocessors. Buyer A asks for a count; buyer B asks for a list. We sometimes ship a list to A and a count to B because the canonical block has both, and the rendering picked wrong. This is a known failure mode; fix is in the queue.
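
One way the queued fix might look: pick count versus list from what the question actually asks for. This is a sketch under assumed field names, not the shipped behavior.

```python
def render_subprocessors(block: dict, question: str) -> str:
    """Choose count vs. list based on the buyer's wording.
    A sketch of the queued fix; the `subprocessors` field on the
    canonical block is an assumption."""
    q = question.lower()
    wants_count = any(p in q for p in ("how many", "number of", "count of"))
    if wants_count:
        return str(len(block["subprocessors"]))
    return "; ".join(block["subprocessors"])
```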

The buyer-specific yes-but. A questionnaire asks “do you support customer-managed encryption keys” and the canonical answer is “yes, with the constraint that we don’t support BYOK across our analytics tier.” The drafting engine surfaces the canonical answer; the reviewer adds the buyer-specific framing. The framing is the reviewer’s job and it stays the reviewer’s job, because it is sales work, not knowledge-management work.

The math, weekly

  • Inbound: 4 documents, ~720 questions
  • After deduplication: ~250 unique questions
  • Auto-drafted with high-confidence citations: ~60% (~150)
  • Auto-drafted with review badge: ~25% (~63)
  • Refused or routed to human triage: ~15% (~38)
  • Reviewer time: ~50 minutes total across the batch

Five years ago the same batch was a person-week of work — consistent with the Safe Security write-up on the average enterprise’s security-questionnaire load. Today it’s an hour. Five years from now we expect it to be twenty minutes, with the residual being the essay questions and the buyer-specific framing — work that probably doesn’t compress further because it isn’t compressible.

The takeaway

Automation flattens the parts of the DDQ that are knowledge-management work. The parts that are sales work — buyer-specific framing, judgment about what to disclose, drafting essay responses — stay reviewer time, and the productivity story is honest only if you say so out loud.

Sources

  1. Safe Security — Vendor security questionnaire best practices
  2. Arphie — How AI is transforming security questionnaire processes
  3. PursuitAgent — DDQ response playbook