Vendor risk management: patterns we see on the procurement side
A cross-cut of roughly 200 DDQs from the last six months — the fields that repeat, the fields that vary, and what the repetition tells us about how vendor risk teams actually operate.
This post is a pattern-level look across roughly 200 DDQs that passed through the platform in the last six months. We aren’t publishing any specific customer’s questionnaires or answers. What we can publish is the structural pattern — which fields repeat, which fields vary, and what the shape tells us about how vendor risk teams are actually operating in 2026.
Methodology note
We looked at the distinct question-identifier sets across DDQs that customers classified in the platform. We excluded anything customer-identifying before analysis. We didn’t read answer content. Pattern frequencies below are derived from field-level prevalence in the question sets, not from any particular vendor’s or buyer’s data.
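For the curious, the reduction is simple enough to sketch. Here's a minimal Python version, assuming each DDQ has already been collapsed to a de-identified set of canonical field identifiers; the identifier names below are hypothetical illustrations, not our internal schema:

```python
from collections import Counter

# Hypothetical canonical field identifiers, not our internal schema.
ddqs: list[set[str]] = [
    {"soc2_type2", "encryption_at_rest", "data_residency", "subprocessor_list"},
    {"soc2_type2", "encryption_in_transit", "incident_response", "data_residency"},
    # ... ~200 de-identified question sets in the real cut
]

def field_prevalence(question_sets: list[set[str]]) -> dict[str, float]:
    """Share of DDQs in which each canonical field appears at least once."""
    counts = Counter(field for qs in question_sets for field in qs)
    n = len(question_sets)
    return {field: count / n for field, count in counts.most_common()}

prevalence = field_prevalence(ddqs)
core = {f: p for f, p in prevalence.items() if p > 0.70}        # the table below
tail = {f: p for f, p in prevalence.items() if 0.30 <= p <= 0.70}  # the industry-specific tail
```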
The 20 fields that appeared in more than 70% of DDQs
Across the sample, twenty recurring fields showed up in more than 70% of questionnaires. The full list, ranked by prevalence:
| Rank | Field | Share of DDQs |
|---|---|---|
| 1 | SOC 2 Type II audit status and period | 97% |
| 2 | Encryption at rest (algorithm, key management) | 94% |
| 3 | Encryption in transit (TLS version, cipher suites) | 92% |
| 4 | Data residency (named regions) | 90% |
| 5 | Incident response plan (RTO/RPO commitments) | 88% |
| 6 | Subprocessor list (named third parties with data access) | 87% |
| 7 | Background check policy for staff with data access | 84% |
| 8 | Penetration test cadence and most recent date | 84% |
| 9 | Business continuity plan (named DR region, test cadence) | 82% |
| 10 | Access control (MFA, SSO, RBAC granularity) | 81% |
| 11 | Logging and monitoring (retention period, SIEM) | 78% |
| 12 | Data deletion process (on termination, on request) | 77% |
| 13 | Insurance coverage (cyber, E&O, general liability) | 75% |
| 14 | Source-code security review cadence (SAST/DAST) | 74% |
| 15 | Patch management cadence | 73% |
| 16 | Third-party audit frequency and scope | 72% |
| 17 | Vulnerability disclosure / bug-bounty program | 71% |
| 18 | Privacy framework (GDPR, CCPA/CPRA, HIPAA if applicable) | 71% |
| 19 | Employee security training cadence | 70% |
| 20 | Data classification policy | 70% |
The tail — fields that appear in 30–70% of DDQs — is dominated by industry-specific variation: healthcare adds HIPAA BAA clauses, finance adds SOX attestations, federal-adjacent work adds FedRAMP alignment and CUI handling.
What the pattern tells us
Roughly 70% of DDQ questions are interchangeable. The top 20 repeat fields make up the bulk of the questionnaire volume. This matches the Safe Security research on questionnaire fatigue — organizations face rising questionnaire volumes, and the questions are largely reused across buyers with minor wording changes.
The “just different enough” tax is real. The same substantive question (e.g., “Describe your encryption key management process”) appears in at least five measurably different phrasings across the sample. A vendor’s content library has to match across phrasings, which is exactly the retrieval problem we covered in the DDQ response playbook.
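To make the matching problem concrete, here's a deliberately naive sketch: Jaccard overlap on normalized tokens. The stop words and example phrasings are illustrative, and the point is how badly string-level matching does, even on re-wordings a human would call the same question:

```python
import re

STOP = {"please", "describe", "detail", "your", "the", "a", "an", "of",
        "in", "how", "are", "is", "do", "you", "for", "and"}

def normalize(question: str) -> frozenset[str]:
    """Lowercase, strip punctuation, drop filler words."""
    tokens = re.findall(r"[a-z0-9]+", question.lower())
    return frozenset(t for t in tokens if t not in STOP)

def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over normalized token sets."""
    ta, tb = normalize(a), normalize(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

library_entry = "Describe your encryption key management process."
phrasings = [
    "Please describe your process for managing encryption keys.",    # ~0.33, same question to a human
    "How are encryption keys generated, stored, and rotated?",       # ~0.12
    "Detail the lifecycle of cryptographic material in your stack.", # 0.0, a true paraphrase
]
for q in phrasings:
    print(f"{token_overlap(library_entry, q):.2f}  {q}")
```

Even the first re-wording scores around 0.33 ("keys" and "managing" don't match "key" and "management" without stemming), and the paraphrase scores zero. That gap is why content libraries lean on semantic retrieval rather than string matching.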
Variance is concentrated in the tail 20–30% of the questionnaire. The repeating ~70% is standard; the differentiating tail is where the buyer’s actual risk priorities show up. Those questions are often the most consequential and the most likely to trip up a vendor with a stale KB. They’re also the questions where the buyer’s vendor risk team has actually thought about what they want to know.
Portal diversity is the hidden cost. Of the DDQs in the sample, about 55% arrived via a portal (OneTrust, Whistic, ProcessUnity, and a long tail of smaller GRC tools). Each portal has different field schemas, different attachment limits, and different answer-length constraints. A KB answer that’s tuned to “125 words with a paragraph structure” is wrong for a portal that takes 3-sentence atomic fields.
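One way teams cope is to carry a per-portal profile alongside each KB entry. A hypothetical sketch; the constraint values are illustrative, not the actual limits of any portal named above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PortalProfile:
    """Per-portal answer constraints. Values here are illustrative only."""
    name: str
    max_words: int | None       # hard word cap, if the portal has one
    max_sentences: int | None   # atomic-field portals cap sentences instead
    allows_attachments: bool

PROFILES = {
    "long_form": PortalProfile("long_form", max_words=125, max_sentences=None, allows_attachments=True),
    "atomic":    PortalProfile("atomic", max_words=None, max_sentences=3, allows_attachments=False),
}

def fit_answer(answer: str, profile: PortalProfile) -> str:
    """Naively trim a KB answer to a portal's shape; real pipelines re-draft instead."""
    if profile.max_sentences is not None:
        sentences = [s.strip() for s in answer.split(".") if s.strip()]
        return ". ".join(sentences[: profile.max_sentences]) + "."
    if profile.max_words is not None:
        words = answer.split()
        if len(words) > profile.max_words:
            return " ".join(words[: profile.max_words])
    return answer
```

The trimming logic is crude on purpose. The point is that an answer's shape is portal metadata, not something a drafter should have to rediscover on every questionnaire.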
The repeat-buyer effect
A subset of the sample was repeat DDQs — same buyer, same vendor, second or third round. In those, the structural similarity approached 95%. Repeat buyers recycle their own questionnaires with incremental changes, so the second DDQ is overwhelmingly a re-ask of the first.
This is the compounding case the DDQ playbook describes. A team that writes back to their KB after each DDQ submission can answer the same buyer’s next DDQ substantially faster. A team that doesn’t write back starts from scratch every time.
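In set terms the repeat-buyer effect is easy to state. A minimal sketch, reusing the canonical-identifier reduction from the methodology note (identifiers again hypothetical):

```python
def round_delta(prev: set[str], current: set[str]) -> tuple[float, set[str]]:
    """Jaccard similarity between two DDQ rounds, plus the net-new questions."""
    union = prev | current
    similarity = len(prev & current) / len(union) if union else 1.0
    return similarity, current - prev

round_1 = {f"q{i}" for i in range(100)}                          # first DDQ: 100 fields
round_2 = (round_1 - {"q3", "q17"}) | {"q100", "q101", "q102"}   # incremental edits
similarity, net_new = round_delta(round_1, round_2)
# similarity = 98/103, roughly 0.95; only the 3 questions in net_new need fresh drafting
```

A team that wrote back after round one answers round two by drafting only `net_new`; everything else is a KB hit.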
What’s changing year-over-year
Comparing this sample against the 2025 reference cut:
- AI governance questions are now in ~40% of DDQs. In 2025 this was ~8%. Buyers are asking about model usage, training data handling, and opt-out rights for their data in vendor AI features.
- Subprocessor detail has expanded. What used to be a single-line list now often requires each subprocessor’s name, region, data types accessed, and any named sub-sub-processors.
- Supply-chain questions have appeared. Roughly 15% of DDQs now ask about SBOM generation, provenance for open-source dependencies, and signed-commit enforcement. This is the continued downstream effect of the 2022–2024 supply-chain incidents.
Gaps in the data
Two gaps worth naming.
Our sample is biased toward mid-market and enterprise B2B SaaS. We have very little visibility into DDQs for regulated industries where the questionnaires don’t flow through third-party software (defense contracting, hospital system onboarding, state-level regulatory filings). The pattern above is the SaaS vendor’s experience, not the universal pattern.
Our sample doesn’t include the buyers’ internal scoring. We see the questions and the answered questionnaires; we don’t see how the buyer’s risk team weighted specific answers. A question that appears in 97% of DDQs might be worth 3% of the score or 30%; we can’t tell from the data.
The takeaway
Vendor risk management, from the procurement side, is running on a largely shared base of questions with a growing differentiating tail. The base is where AI-assisted drafting saves the most time; the tail is where the real risk evaluation happens. A vendor that confuses the two — treating the tail as boilerplate, or treating the base as novel — is miscalibrated.
This analysis is a structural cross-cut, not customer data. Individual DDQs, buyers, and vendors are not identified. The frequencies above are derived from de-identified question-field prevalence.