Field notes

The DDQ answer-reuse myth

The pitch is: every DDQ is mostly the same, so reuse the answers. The reality is: every DDQ is mostly similar but just different enough that naive reuse fails. The gap between similar and identical is where the work lives.

PursuitAgent 5 min read Procurement

The pitch most DDQ-automation vendors make goes roughly: every due diligence questionnaire is essentially the same, the answers repeat, the work should reduce to copy-paste. The pitch is half right, and the half it gets wrong is the half that produces every DDQ-automation horror story you have heard.

The half it gets right: DDQs and security questionnaires overlap heavily. A vendor responding to one this quarter is answering questions that overlap 60 to 80 percent with last quarter’s. The throughput pattern works because of that overlap; the retrieval pipeline shipped last week leans on it directly.

The half it gets wrong is the framing. 1up.ai put it crisply: “Most questionnaires are quite similar, but just different enough that you can’t copy/paste every answer, and re-writing takes just as long as creating a response from scratch.” That gap — similar but not identical — is the entire problem. If you treat DDQs as a copy-paste workflow, the gap becomes invisible. The team copies, ships, and discovers in week three that the buyer’s specific phrasing of the question wanted a different answer shape.

What naive reuse looks like

A team has a content library — a Google Doc, a Notion page, a Confluence space, sometimes a “DDQ master document” — with the canonical answers to the questions that appear most often. When a new questionnaire arrives, the workflow is:

  1. Match each question to the closest canonical answer.
  2. Copy the canonical answer in.
  3. Submit.

This works for the questions where the canonical answer is verbatim correct. It fails for the questions where the canonical answer is almost correct and nobody notices the small adjustment it needs.
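To make that failure concrete, here is a minimal sketch of what the naive workflow amounts to, assuming a flat content library and a plain text-similarity match. The library entries, function name, and matching method are illustrative, not how any particular tool works.

```python
from difflib import SequenceMatcher

# Hypothetical content library: canonical answers keyed by the question
# they were originally written to answer. Entries are illustrative.
LIBRARY = {
    "Do you maintain SOC 2 Type II compliance?":
        "We maintain SOC 2 Type II compliance, audited annually.",
    "Do you encrypt data at rest?":
        "We encrypt data at rest.",
}

def naive_reuse(new_question: str) -> str:
    """Steps 1 and 2 of the naive workflow: find the closest canonical
    question by surface similarity, then copy its answer verbatim."""
    closest = max(
        LIBRARY,
        key=lambda q: SequenceMatcher(None, q.lower(), new_question.lower()).ratio(),
    )
    return LIBRARY[closest]

# A more specific question still gets the generic canonical answer,
# and nothing in the flow flags that it fell short of what was asked.
print(naive_reuse("Do you encrypt data at rest using AES-256 or stronger?"))
# -> "We encrypt data at rest."
```

Nothing in this flow ever compares the new question’s specifics against the canonical answer’s scope, which is exactly where the failure modes below come from.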

The failure modes I have watched happen:

Tense and aspect. The canonical answer says “we maintain SOC 2 Type II compliance, audited annually.” The new buyer asks “have you completed a SOC 2 Type II audit in the last 12 months?” The canonical answer is paraphrased to fit, and the team submits “yes, we maintain annual SOC 2 Type II compliance.” The buyer reads this and follows up — they wanted a yes/no with a date, they got prose. Small, but it adds friction and signals a lack of attention.

Specificity gaps. The canonical says “we encrypt data at rest.” The new buyer asks “do you encrypt data at rest using AES-256 or stronger?” The team copies “we encrypt data at rest” and the answer is technically not responsive. The buyer either asks again (delaying the procurement by two weeks) or scores the answer down silently.

Boundary cases. The canonical answer covers the company’s primary cloud environment. The new buyer is asking about a sub-product the company runs on a different platform with a different security posture. The team does not notice the boundary, copies the canonical, and now has a misrepresentation in the response. Worse: a misrepresentation in a regulated context, where the vendor has signed an attestation that the answer is true.

Stale facts. The canonical was written when the company had two SOC 2 audit reports. It still says “two.” The team has since produced a third report. The canonical is stale by six months. The reused answer is technically wrong, even though the company’s actual posture is stronger than the answer states.

These are not exotic edge cases. They are the median failure mode of “just reuse the answers.”

What works instead

The shape that works is what we built the pipeline around: reuse as a suggestion, not as an autopilot.

The system retrieves the closest prior approved answer and offers it as a starting point. The starting point is shown alongside the new question so the writer can spot the gap. The writer’s job is to read both and decide whether the prior answer fits, fits with adjustment, or does not fit. The system is doing the retrieval; the human is doing the boundary-spotting.

This is slower than copy-paste in any individual case. It is faster than rewriting from scratch on most questions. It produces fewer errors than copy-paste on the questions where copy-paste would have failed silently. The throughput math works because most questions cluster on “fits with minor adjustment” — read the prior answer, edit one phrase, done in 60 seconds.

The questions that do not cluster — the ones where the buyer is asking something materially different — get flagged because the writer is reading the prior answer rather than auto-pasting it. That surface area for boundary-spotting is the entire point.
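As a sketch of that shape (the names, fields, and statuses here are hypothetical, not the pipeline’s actual API), the retrieval step produces a suggestion that pairs the new question with the closest prior approved answer, and the writer’s verdict is recorded explicitly rather than pasted automatically:

```python
from dataclasses import dataclass
from enum import Enum

class Fit(Enum):
    VERBATIM = "fits"                # prior answer can ship as written
    ADJUST = "fits with adjustment"  # prior answer is only a starting point
    NO_FIT = "does not fit"          # materially different question; write fresh

@dataclass
class Suggestion:
    new_question: str    # what this buyer actually asked
    prior_question: str  # closest question from a prior approved questionnaire
    prior_answer: str    # the approved answer, offered as a starting point
    similarity: float    # retrieval score, shown so the writer can calibrate trust

@dataclass
class Decision:
    suggestion: Suggestion
    fit: Fit
    final_answer: str    # what actually ships

def review(suggestion: Suggestion, fit: Fit, edited_answer: str = "") -> Decision:
    """The human step: read both questions side by side and decide whether
    the prior answer fits, fits with adjustment, or does not fit. The prior
    answer ships verbatim only when the writer says so."""
    final = suggestion.prior_answer if fit is Fit.VERBATIM else edited_answer
    return Decision(suggestion, fit, final)
```

Recording the verdict explicitly is what keeps the “does not fit” questions visible instead of silently papered over, and it is also what makes the reuse-rate metric below computable.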

What changes for buyers

Buyers reading vendor responses can usually tell when reuse was naive. The patterns: tense mismatches, generic answers to specific questions, answers that address an adjacent topic rather than the one asked, attestations that are subtly broader or narrower than the question allows.

Safe Security observed that organizations end up “collecting hundreds of reassuring ‘yes’ responses that don’t reflect real security posture.” That collection is what buyers end up with when vendors reuse naively. Vendors who treat reuse as suggestion-and-review rather than copy-paste produce responses that read as specifically addressed to the question — which evaluators reward, because evaluators are looking for evidence that the vendor read what was asked.

The reuse rate is a useful internal metric. The naive-reuse rate (questions where the answer is verbatim from a prior questionnaire) should be low. The supervised-reuse rate (questions where the prior answer was the starting point but the writer adjusted) should be high. The ratio of those two tells you whether the team is using its DDQ tooling well.
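A back-of-envelope way to compute those rates, assuming the DDQ tool records one outcome per question when the writer accepts, edits, or discards a suggestion (the labels and numbers are illustrative):

```python
from collections import Counter

def reuse_rates(question_outcomes: list[str]) -> dict[str, float]:
    """Compute the two internal metrics from per-question outcomes, where
    each outcome is one of "verbatim", "adjusted", or "from_scratch"."""
    counts = Counter(question_outcomes)
    total = len(question_outcomes) or 1
    return {
        "naive_reuse_rate": counts["verbatim"] / total,       # should be low
        "supervised_reuse_rate": counts["adjusted"] / total,  # should be high
        "from_scratch_rate": counts["from_scratch"] / total,
    }

# Example: a 50-question DDQ
print(reuse_rates(["verbatim"] * 6 + ["adjusted"] * 38 + ["from_scratch"] * 6))
# -> naive 0.12, supervised 0.76, from-scratch 0.12
```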

The short version

Reuse is real. The 60 to 80 percent overlap across DDQs is not a marketing number; it is the actual pattern in the data. The myth is that the overlap is identity rather than similarity. Treat reuse as a retrieval step that produces a starting point a human reviews, and the throughput holds without the silent-failure tax. Treat reuse as copy-paste, and you ship the boundary errors that get scored down or, worse, end up in a regulated attestation.

The pipeline is not the differentiator. The discipline of the people running the pipeline is.

Sources

  1. 1up.ai — The problem with RFP software
  2. Safe Security — Vendor security questionnaire best practices