Field notes

The security-questionnaire response team that actually ships

Three roles, one DRI, a 48-hour SLA. How regulated vendors staff the Q4 questionnaire wave without shipping stale answers or missing deadlines.

Sarah Smith · 7 min read · Procurement

A security-questionnaire team that ships on time has three named roles, one directly responsible individual, and a 48-hour SLA on SME tickets. A team that does not have this structure tends to miss deadlines or ship answers nobody has actually reviewed. The structure is boring. It works.

This is a companion to the engineering pillar on the 80/20 split in security questionnaires. The pillar covers the machine. This post covers the people.

The three roles

The DRI — directly responsible individual. One person owns the questionnaire end to end. They are not the person who writes most of the answers. They are the person the commercial side calls when the questionnaire is late or contested. On most teams I have worked with, the DRI is a GRC analyst or a senior security-questionnaire specialist — not a security engineer, not a proposal manager. GRC analysts have the right mix of process discipline, context on the existing KB, and comfort with framework vocabulary.

The DRI does four things. They triage the incoming questionnaire into the response workflow within one business day. They ensure every question is categorized — auto-answerable, escalation-bound, or custom — within two business days. They run the SME ticketing queue, including nudging owners whose SLA is slipping. And they run the final quality pass before the response ships.

The retrieval operator. The person who drives the tool for the 80% of questions that map to existing KB blocks. On small teams this is the DRI. On larger teams it is a dedicated analyst. The retrieval operator reviews every auto-drafted answer, checks the citation against the current evidence, and either approves it, edits it, or escalates it. The operator is not writing; they are reviewing machine output and making the accept/edit/escalate call.

A good retrieval operator approves 70% to 85% of auto-drafted answers without edits. Lower than 70% means the KB is drifting or the confidence floor is set too low. Higher than 85% is either a sign that the operator is rubber-stamping — a real risk — or a sign that the KB is unusually mature and the questionnaire is template-heavy.
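The 70%-to-85% band is simple enough to automate as a weekly health check. A sketch, using the article's own thresholds (the band is this post's heuristic, not an industry standard):

```python
def review_health(approved_unedited: int, total_reviewed: int) -> str:
    """Classify a retrieval operator's unedited-approval rate
    against the 70%-85% band described above."""
    if total_reviewed == 0:
        return "no data"
    rate = approved_unedited / total_reviewed
    if rate < 0.70:
        return "low: KB drift or confidence floor set too low"
    if rate > 0.85:
        return "high: check for rubber-stamping (or a very mature KB)"
    return "healthy"
```

Run it per operator per week; a "high" result is a prompt for a spot check, not an accusation.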

The SME network. The subject-matter experts who answer the 20% tail. A security engineer for architectural questions, a legal counsel for contractual-commitment questions, a solutions engineer for deployment-specific questions. They do not sit on the questionnaire team; they respond to tickets from it. A regulated mid-market vendor typically has six to twelve named SMEs across engineering, security, legal, and operations who can be paged for questionnaire work.

The important thing about the SME network is that the SLAs are written down and agreed to in advance. “I will respond to questionnaire tickets tagged to me within 48 business hours” is a line in the SME’s role expectations. Without that commitment, every questionnaire becomes a negotiation and every deadline slips.

The 48-hour SLA

The SLA is the single most important operational commitment the team makes. The clock runs from the moment a ticket is assigned to the SME to the moment the SME answers, escalates, or declines it: 48 hours measured against the working week — two business days, with weekends pausing the clock.

The SLA is not arbitrary. It is calibrated to the typical buyer nudge cadence. A buyer who sends a questionnaire on Monday and has not heard back by Friday will typically follow up with a "can you confirm your response timeline" note. To ship a complete package by Friday, the vendor needs the SME tail resolved by Wednesday end of day — and a 48-hour clock that starts with Monday assignment expires exactly then.
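The two-working-day clock is easy to get wrong by counting calendar days. A minimal sketch of the deadline arithmetic, skipping weekends (company holidays are ignored here):

```python
from datetime import date, timedelta

def sla_deadline(assigned: date, working_days: int = 2) -> date:
    """Date the 48-hour (two-working-day) SLA expires,
    counting only Monday through Friday."""
    d = assigned
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon-Fri
            remaining -= 1
    return d
```

A ticket assigned Monday is due Wednesday; one assigned Friday is due the following Tuesday, not Sunday.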

The SLA has teeth only if it is measured. We track SLA attainment per SME per quarter. An SME whose attainment drops below 85% gets a conversation with their manager. Not punishment — context. Usually the reason is workload; sometimes the reason is unclear ticket wording; occasionally the reason is that the SME is not the right owner for that category of question and routing needs to change.
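Measuring attainment per SME per quarter takes very little machinery. A sketch, assuming each closed ticket records its owner and whether it met the SLA (field names are illustrative):

```python
from collections import defaultdict

def attainment_report(tickets: list[dict]) -> dict[str, float]:
    """Per-SME SLA attainment over a set of closed tickets."""
    met: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for t in tickets:
        total[t["sme"]] += 1
        if t["within_sla"]:
            met[t["sme"]] += 1
    return {sme: met[sme] / total[sme] for sme in total}

def needs_conversation(report: dict[str, float], floor: float = 0.85) -> list[str]:
    """SMEs below the 85% floor; the next step is context, not punishment."""
    return sorted(sme for sme, rate in report.items() if rate < floor)
```

The flag list feeds the manager conversation described above; routing changes often resolve it faster than workload changes.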

Loopio’s benchmark of 15 to 40 hours per questionnaire includes the SME time. A team that keeps SLA attainment above 90% sits near the 15-hour end of that range. A team that does not tends to sit past the 40-hour end and blow through deadlines.

The weekly cadence

The team I work with the most runs on a weekly cadence that looks roughly like this:

  • Monday 09:00 — Inbound triage. The DRI walks through every new questionnaire that arrived over the weekend. Each gets a response plan: retrieval-only (80% auto-answer), retrieval + two SME tickets (standard), retrieval + four-plus SME tickets (custom-heavy). Each gets a ship-by date.
  • Monday–Wednesday — Retrieval operator drives auto-answers. SME tickets fire on Tuesday morning at the latest.
  • Wednesday afternoon — SME check-in. The DRI walks through the open SME tickets, nudges the ones approaching their 48-hour mark, and reroutes any ticket that has been declined.
  • Thursday — Consolidation and quality pass. All answers are in the response package. The DRI reads every answer in sequence to catch voice inconsistencies and obvious errors.
  • Friday — Ship. Responses go back to buyers. Any that need client-legal review go through a short morning window.
  • Friday afternoon — Retro. 20 minutes, no agenda. What broke this week, what KB blocks are now obsolete, what SMEs were overloaded. Most of the retro output goes into next week’s plan or into the KB maintenance backlog.

A team running this cadence cleanly can ship six to ten questionnaires per week per DRI. Past ten, the DRI saturates and the quality pass degrades. The right move at that point is to split the DRI role across two people, or to redirect some of the incoming volume to asynchronous-first response modes (SIG Lite, CAIQ) instead of custom questionnaires.

Voice and consistency

A questionnaire answered by three people reads like it was answered by three people. Buyers notice. The stylistic choices — “we maintain” vs. “PursuitAgent maintains,” “our platform” vs. “the product,” paragraph length, use of bullets — drift across analysts and the drift shows.

The fix is a house voice document for the security-questionnaire response. Two pages. The first page covers vocabulary — always “customer data” not “client data,” always “production environment” not “prod,” always MFA spelled out on first use. The second page covers sentence structure — “yes” answers open with yes, “partial” answers open with the qualifier before the explanation, numeric answers lead with the number.
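The vocabulary page of the voice doc lends itself to a lint pass over drafted answers before the DRI's quality read. A minimal sketch; the rule table below mirrors the examples in the text and would be extended from a team's actual voice doc:

```python
import re

# Vocabulary rules from a hypothetical house voice doc:
# banned-pattern -> preferred phrasing.
VOCAB_RULES = {
    r"\bclient data\b": "customer data",
    r"\bprod\b": "production environment",
}

def lint_answer(answer: str) -> list[str]:
    """Return house-voice vocabulary violations found in an answer."""
    issues = []
    for pattern, preferred in VOCAB_RULES.items():
        if re.search(pattern, answer, flags=re.IGNORECASE):
            issues.append(f"use '{preferred}' instead of /{pattern}/")
    return issues
```

A clean lint pass does not replace the DRI's read; it just keeps the read focused on substance rather than vocabulary drift.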

I have written separately about how voice consistency beats polish in DDQ answers. The short version: buyers forgive a plainly-written answer. They do not forgive an answer that reads like it was stitched from four different people’s drafts. The house voice doc is the artifact that keeps the stitching invisible.

What does not work

Two patterns I have seen fail on teams I have advised.

Rotating the DRI role monthly. Teams try this to spread the burden. It does not work because the DRI relationship with the SME network is what makes the ticket queue fast. A DRI who has worked with the security engineer for six months knows how to write a ticket that engineer will answer in twenty minutes. A DRI three weeks in writes tickets that take two hours. The rotation trades a burden problem for a throughput problem.

Centralizing into a “security-response team” that reports to sales. The commercial side has a legitimate interest in questionnaire response speed. The security side has a legitimate interest in answer accuracy. When the response function reports to sales, the tie goes to speed. That is how you get the Tuesday-night approvals the opinion piece from Monday described. The right reporting line is GRC or security, with a dotted line to sales operations for pipeline visibility.

The three roles, one DRI, 48-hour SLA structure is not the only way to staff a security-questionnaire function. It is the one I have seen work most consistently. For the tooling side of the same workflow, the 80/20 pillar is the companion.

Sources

  1. Loopio — How long does it take to respond to a DDQ?
  2. APMP — 2024 Proposal Professional Salary Survey