Security-questionnaire volume in 2025: the data
Safe Security's 500+/year claim, tested against the volume we see across our own fleet. Category breakdowns, seasonal spikes, and the questions that are growing fastest.
Safe Security published a widely cited number in 2024: the average regulated mid-market SaaS vendor receives 500 or more security questionnaires per year. The number shows up in vendor pitches, analyst reports, and our own writing. It is worth checking whether the number still holds in 2025, and whether the distribution is anywhere close to what the 2024 headline suggested.
This post reports what we see across our own fleet. The data is aggregated and anonymized; individual customer numbers stay private. The intent is to publish a benchmark that a practitioner can compare their own inbox against.
The headline number
Across the customer accounts we measure, the median regulated B2B vendor received 410 security questionnaires in the 12 months ending October 2025. The 90th percentile received 1,020. The 10th percentile received 85. The mean — pulled up by three outlier accounts — was 520.
Safe Security’s 500+/yr figure is still directionally right. The median is lower than the headline suggests. The mean matches. The distribution is long-tailed, which is the thing the headline misses. A vendor at the 90th percentile is answering 20 questionnaires per week; a vendor at the median is answering eight; a vendor at the 10th percentile is answering fewer than two. The operational problem is very different at the three points.
Loopio’s published estimate of 15 to 40 hours per questionnaire maps onto these volumes cleanly. The median vendor spends roughly 6,000 to 16,000 hours per year on security questionnaires; the 90th-percentile vendor spends roughly 15,000 to 41,000. At 2,000 working hours per FTE-year, the median burden is three to eight full-time employees, just for questionnaires. The staffing implication is why the category has moved from “someone on the security team handles it” to “there is a dedicated questionnaire function reporting into GRC.”
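The arithmetic is worth making explicit. A back-of-envelope sketch using the percentile volumes above and Loopio's 15–40 hour range; the 2,000-hour FTE-year is our assumption, not a published figure:

```python
# Back-of-envelope staffing math from the volumes above.
# Assumption (ours, not from any survey): 2,000 working hours per FTE-year.
HOURS_PER_QUESTIONNAIRE = (15, 40)  # Loopio's published range
FTE_HOURS_PER_YEAR = 2_000


def annual_burden(questionnaires_per_year: int):
    """Return (low hours, high hours, low FTE, high FTE) for a given volume."""
    lo = questionnaires_per_year * HOURS_PER_QUESTIONNAIRE[0]
    hi = questionnaires_per_year * HOURS_PER_QUESTIONNAIRE[1]
    return lo, hi, lo / FTE_HOURS_PER_YEAR, hi / FTE_HOURS_PER_YEAR


for label, volume in [("p10", 85), ("median", 410), ("p90", 1_020)]:
    lo_h, hi_h, lo_fte, hi_fte = annual_burden(volume)
    print(f"{label:>6}: {lo_h:>6,.0f}-{hi_h:,.0f} hours/yr = {lo_fte:.1f}-{hi_fte:.1f} FTE")
```

The ranges come out slightly less round than the prose (the median is 6,150–16,400 hours), but the FTE conclusion is the same at any reasonable rounding.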
Distribution by framework
The questionnaires cluster by framework. We categorize each incoming questionnaire by its closest template match. The distribution we see:
| Framework / template | Share of volume |
|---|---|
| Custom buyer questionnaire | 41% |
| SIG (Shared Assessments) | 18% |
| CAIQ (Cloud Security Alliance) | 12% |
| SOC 2-aligned custom | 11% |
| ISO 27001-aligned custom | 8% |
| HIPAA-aligned (healthcare buyers) | 5% |
| PCI-DSS-aligned (payments buyers) | 3% |
| Government / FedRAMP-adjacent | 2% |
The “custom buyer questionnaire” category is the largest and the most misleading. When we look inside those custom questionnaires, 75% of their questions map cleanly to a SIG, CAIQ, or SOC 2 question. The buyer’s procurement team wrote their own template, but the questions are paraphrases of the standard frameworks. The structural reuse in the knowledge base (KB) holds even when the buyer’s cover sheet looks unique.
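A minimal sketch of that mapping, with a toy canonical bank and stdlib string similarity standing in for the embedding retrieval a production pipeline would use. All question text, framework IDs, and the 0.6 threshold here are illustrative, not from any real KB:

```python
# Hypothetical sketch: map a custom buyer question to its closest
# canonical framework question. Real pipelines would use embeddings;
# difflib keeps this sketch dependency-free.
from difflib import SequenceMatcher

# Toy canonical bank keyed by (framework, id); contents are illustrative.
CANONICAL = {
    ("SIG", "D.1"): "do you encrypt customer data at rest",
    ("CAIQ", "EKM-03"): "do you encrypt data in transit between sites",
    ("SOC2", "CC6.1"): "is multi-factor authentication required for all users",
}


def best_match(question: str, threshold: float = 0.6):
    """Return the closest canonical (framework, id), or None below threshold."""
    q = question.lower()
    scored = [(SequenceMatcher(None, q, text).ratio(), key)
              for key, text in CANONICAL.items()]
    score, key = max(scored)
    return key if score >= threshold else None


# A paraphrased buyer question lands on the canonical block it restates.
print(best_match("Do you encrypt our data at rest?"))
```

The point of the sketch is the framing: once a custom question resolves to a canonical block, answering it is retrieval, not drafting.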
SIG released content updates in 2024 that added AI-specific controls. CAIQ shipped v4 with expanded third-party risk sections. Both changes show up in the 2025 inbox. A KB that was built against SIG 2022 and CAIQ v3.1 will have coverage gaps on the new controls. We see those gaps as refused auto-answers in customer data; they are the largest single source of SME escalation.
Seasonal spikes
The volume is not flat across the year. Four peaks repeat:
Late Q1 (February–March). New procurement cycles open at enterprise buyers whose fiscal year starts January 1. The vendor-risk teams are staffed up, the contracts calendar is filling, and new questionnaires go out in bulk. We see a 30% bump versus the annual baseline.
Late Q3 (September–October). Federal fiscal year-end on September 30 and commercial annual contract renewal cycles coincide. Security-questionnaire volume bumps 40% over baseline. This is the spike the October federal FY clock post covered in more detail.
Mid-November through mid-December. Year-end procurement push. New vendor onboarding before the holiday freeze. We see this peak cresting right now as we publish this post — mid-November volume is running 45% above baseline.
January. The quiet month. Volume drops 15% below baseline. Procurement teams are recalibrating; new vendor onboarding hasn’t started; the inbox is the slowest of the year. This is the month to invest in KB maintenance.
The peaks matter operationally. A questionnaire team staffed to handle median volume will be underwater during the November peak. Teams that staff to peak have idle capacity in January. The reallocation strategy — move engineers into the questionnaire queue during peaks, move them back to KB work during troughs — shows up in the rolling capacity plans we’ve written about elsewhere.
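The multipliers above translate directly into weekly load. A sketch at the median vendor's volume; the seasonal percentages are the ones reported above, and the even 52-week baseline is a simplifying assumption:

```python
# Seasonal load at the median vendor (410/yr over an assumed even
# 52-week baseline), using the multipliers reported above.
BASELINE_PER_WEEK = 410 / 52  # ~7.9 questionnaires/week

SEASONAL_MULTIPLIER = {
    "late Q1": 1.30,   # Feb-Mar procurement cycles open
    "late Q3": 1.40,   # federal FY-end plus commercial renewals
    "Nov-Dec": 1.45,   # year-end push before the holiday freeze
    "January": 0.85,   # the quiet month
}

for period, mult in SEASONAL_MULTIPLIER.items():
    print(f"{period:>8}: {BASELINE_PER_WEEK * mult:4.1f} questionnaires/week")
```

The November peak at the median is roughly 11 per week against a baseline of 8, which is the gap the reallocation strategy has to cover.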
Questions that are growing
We also categorize incoming questions by topic. Five categories are growing faster than the overall volume, which means they are claiming share from other topics:
- AI and LLM usage. “Do you use third-party LLMs? Does customer data touch those models? How is training-data exclusion enforced?” These questions barely existed in Q1 2023. They are now present in 78% of questionnaires we ingest.
- Subprocessor transparency. Driven by GDPR enforcement and the wave of EU data-residency negotiations. Buyers want a current subprocessor list, notification rights, and contractual limits on adding new subprocessors.
- Supply-chain security. SBOM (software bill of materials) questions, dependency review practices, build provenance. The volume on these tripled between 2023 and 2025.
- Incident-response timelines. Buyers are asking for tighter notification windows — 24 hours, 48 hours, 72 hours — and asking how the vendor tracks the clock internally.
- Regulatory-specific questions. State-level privacy laws (California CCPA/CPRA, Colorado CPA, eight more state laws since 2023) are being embedded into enterprise questionnaires even where they do not technically apply.
Five categories that are shrinking:
- Physical security at vendor offices. The remote-work shift made the question less relevant for most SaaS vendors.
- On-prem deployment requirements. Still asked, but a smaller share.
- Specific antivirus/EDR product questions. Replaced by outcome-based questions.
- Password policy specifics. Replaced by MFA/SSO control questions.
- Generic “describe your security program” prose. Replaced by framework-mapped questions.
What the data says about tooling
Three practical readings come out of the numbers.
The tail is where the new work lives. The 80% of questions that repeat are solvable by retrieval. The 20% that do not repeat include the fastest-growing categories — AI/LLM questions, supply-chain questions, novel regulatory questions. A KB that is updated once per audit cycle will be one to two years behind the growing categories. Quarterly updates to the security KB are not optional in 2025.
Median vendors need different tools than 90th-percentile vendors. A vendor at eight questionnaires per week can run a largely-manual workflow with retrieval assistance. A vendor at 20 per week cannot. The operational cliff sits somewhere between 12 and 15 per week, and below that cliff most teams we measure are under-tooled but functional; above it, the team cracks.
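The cliff falls out of simple throughput math. A hypothetical model: the 25-hour figure is the midpoint of Loopio's 15–40 hour range, while the team size, 40-hour analyst week, and the ~10-hour retrieval-assisted figure are our assumptions:

```python
# Hypothetical throughput model behind the 12-15/week cliff.
# Assumptions: 40 h/analyst-week; ~25 h per questionnaire fully manual
# (Loopio midpoint); ~10 h with retrieval pre-filling repeat questions.
import math

ANALYST_HOURS_PER_WEEK = 40


def analysts_needed(questionnaires_per_week: float, hours_each: float) -> int:
    """Whole analysts required to keep up with a given weekly load."""
    return math.ceil(questionnaires_per_week * hours_each / ANALYST_HOURS_PER_WEEK)


for load in (8, 12, 15, 20):
    print(f"{load:>2}/wk: manual {analysts_needed(load, 25):>2} analysts, "
          f"assisted {analysts_needed(load, 10):>2}")
```

Under these assumptions a five-analyst team saturates near 8 per week fully manual; past the low teens, either headcount or hours-per-questionnaire has to give, which is where the tooling change happens.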
The custom-buyer questionnaire framing is a trap. If a vendor’s tooling is based on “we will hire analysts to answer custom questionnaires by hand,” the 41% custom-questionnaire share looks intimidating. If the tooling maps custom questions onto canonical KB blocks first, the 41% collapses into the same underlying work as SIG and CAIQ. The framing choice is the tooling choice.
Gaps in the data
Three things we can’t measure from our fleet alone.
We cannot report on vendors who do not use a tool. The teams answering questionnaires in Word documents are not in our dataset. The Safe Security survey captured some of that population; ours does not.
We cannot report on the buyer-side time cost. How long buyers spend grading questionnaires, which questions they actually read carefully, which ones they skim — that data sits on the buyer side and is not published. The APMP salary survey captures some vendor-side data but does not touch the buyer-side grading effort.
We cannot report on questionnaire outcomes. Winning a deal is not a directly-observable function of questionnaire quality; other deal factors dominate. The best we can say is that questionnaire response time and the fraction of questions that escalate to SMEs are correlated with deal velocity, not with close rate.
For a tighter look at the mechanics of the 80/20, the pillar dropping Thursday covers the retrieval layer in depth. For the craft side of staffing, Sarah’s team-structure post is the companion piece.