Field notes

Anatomy of an enterprise SaaS RFP, 2025 edition

An annotated teardown of a representative enterprise SaaS procurement. Data-residency, AI-usage clauses, three recurring red flags, and where the 2025 version has diverged from the 2024 template.

The PursuitAgent research team · 9 min read · RFP Mechanics

This is a structural teardown of an enterprise SaaS request-for-proposal as it lands in late 2025. The document we describe is a composite — built from the patterns we see across dozens of real enterprise-buyer RFPs moving through our pipeline — not a specific published RFP. We describe the shape, flag what has changed since the 2024 version of the same kind of document, and call out three red flags that appear often enough to be worth naming.

If you are responding to enterprise procurements and your internal template for analyzing them was built in 2023, this post is your gap analysis.

The shape of the document

A representative 2025 enterprise SaaS RFP from a Fortune 1000 buyer runs 55 to 90 pages and breaks into eight major sections, not counting appendices.

Section | Pages | What it contains
1. Introduction & business context | 3-5 | Buyer’s strategic initiative, scope summary, timeline
2. Functional requirements | 12-18 | Feature matrix, often in spreadsheet appendix
3. Non-functional and operational requirements | 6-8 | Performance, availability, scalability, support SLAs
4. Security and compliance | 10-16 | Posture, certifications, increasingly a large AI-usage section
5. Data handling and residency | 4-8 | Where data lives, sub-processor list, cross-border transfer
6. Implementation and change management | 4-6 | Onboarding, training, migration, success metrics
7. Pricing and commercial terms | 3-5 | Commercial model, discount mechanics, payment terms
8. Legal and contracting terms | 6-10 | MSA redlines, indemnification, liability caps
Appendices: feature matrix, security questionnaire, references, case studies | 15-30 | Structured response templates

The 2024 equivalent of this document averaged 15 to 25 pages shorter. The growth is concentrated in two sections: security-and-compliance (+50% in average page count) and data-handling (+30%). Every other section is roughly stable.

What has changed since 2024

Six documented shifts:

AI-usage is now a first-class section, not a footnote. Enterprise RFPs in 2024 rarely asked about AI usage. 2025 RFPs ask about it directly and at length. Typical questions include:

  • Does your product use generative AI in any customer-facing workflow?
  • What is your training-data policy for customer data?
  • Which third-party model providers do you integrate with, and under what data-handling agreements?
  • What is your human-in-the-loop posture on AI outputs?
  • What is your incident-response posture for AI-specific failure modes (hallucination, prompt injection, data leakage via model)?

In our Q4 volume retrospective, the AI-and-model-usage bucket tripled its share of question volume year-over-year. This section of the RFP has arrived.

Data-residency is more granular. 2024 RFPs asked “where does the data live.” 2025 RFPs ask “which specific data classes live where, who has access to each, and under which jurisdiction’s legal compulsion regime.” The detail level has moved from a single answer to a matrix.

Sub-processor transparency is required. The list of third parties who touch customer data is now expected in the response itself, not in an MSA exhibit that appears later. Vendors who treat sub-processor disclosure as a pre-contract negotiation item rather than a response requirement are losing points at evaluation time.

AI-specific indemnification clauses. Buyers’ legal teams are adding language in Section 8 that specifically addresses outputs produced by AI systems — copyright infringement in model outputs, liability for hallucinated claims surfaced in product workflows, and obligations to disclose when AI is being used in customer-serving processes. Vendors need to know their own internal legal position on these clauses before they read them in an RFP for the first time.

Breach-notification timelines have tightened. The median required breach-notification window in our sample moved from 72 hours in 2024 to 48 hours in 2025. Some regulated-sector buyers require 24 hours.

Evidence-attachment lists are longer. The 2024 version asked for SOC 2 and a certificate of insurance. The 2025 version asks for SOC 2, penetration-test summary (last 12 months), third-party AI-audit report if applicable, business continuity plan excerpt, supplier diversity documentation, and a privacy impact assessment. Safe Security’s research on the evidence-attachment trend is consistent with what we see.

Three red flags

Three patterns appear in roughly a third of the enterprise SaaS RFPs moving through our pipeline. None is illegal. All are worth flagging at bid/no-bid.

Red flag 1 — the stealth RFQ

The document looks like an RFP. The evaluation framework in Section 5 is weighted 70% on price. Vendor differentiation on technical or operational axes cannot move the score enough to overcome a cheaper competitor. The buyer has issued a request-for-quotation wearing an RFP costume.

Tell: read Section 5’s evaluation weights before reading anything else. A section that weights price above 60% is, functionally, a price competition. A vendor who cannot compete on price should either pass or plan their response around a narrow high-margin scope instead of chasing the whole RFP.

Red flag 2 — the compliance trap

The functional requirements section lists 50 to 80 mandatory items. Half of them are edge features the buyer’s operational team has never used but was told to include. The RFP’s mandatory-items language means a single “no” disqualifies the response.

Tell: mandatory items that read like a checklist pulled from a vendor-capabilities site are usually not the buyer’s actual needs. The capture work — if you can get to the buyer’s operational team before responding — is worth far more than effort spent trying to claim capabilities you don’t have. Fairmarkit has written about the buyer-side pattern that produces this: operational teams draft requirements lists from everything they’ve heard of. The compliance trap on the vendor side is the downstream effect.

Red flag 3 — the legal dare

Section 8 contains non-negotiable terms that the buyer’s legal team knows most vendors will redline. The dare is whether the vendor spends the response effort to engage with the terms or treats them as boilerplate. Buyers who write these terms are signaling that they will value vendors who engage substantively — usually larger, more legally-resourced vendors — and that they would rather lose smaller bidders than soften the terms.

Tell: the terms will often address topics like uncapped indemnification for security breaches, liquidated damages for SLA misses, or ownership of customer-data derivatives. A vendor who cannot afford the legal work to respond substantively should pass. A vendor who can should invest in the legal response, because the buyer is watching for it.

How this changes response structure

A response to a 2025 enterprise SaaS RFP has to allocate effort differently than a response to the same buyer’s 2024 RFP. Specifically:

  • AI-usage section gets treated as a technical volume in its own right. This is not a one-page appendix. It is a full-voice response to a substantive section. Vendors without a canonical AI-posture document cannot compete on velocity here; the first response will take days, and the second one will take a fraction of that if the canonical document is produced.
  • Data-residency and sub-processor disclosure get produced from a live inventory, not from a document written last year. The inventory must be owned. Someone on the vendor side has to be accountable for it being current at any given moment.
  • The security questionnaire in the appendix is answered from the knowledge base, not drafted fresh. The recurring-question pattern is strong enough that teams without retrieval on the questionnaire will spend 20+ hours on questions a team with retrieval answers in two. VisibleThread’s observation that understanding the requirements is the #1 differentiator applies harder on the questionnaire than on the narrative sections.
  • The legal response is pre-drafted. The common redlines should live in a legal-position document the proposal team can pull from without a fresh legal review per RFP. Fresh review for novel terms; pre-approved text for the common 80%.

What this post can’t tell you

Two honest limits.

We cannot publish any specific buyer’s RFP. The composite we describe is defensible because it reflects patterns we see repeatedly, but a reader who wants to anchor this to a specific named procurement will have to do that work against their own pipeline.

We cannot claim generalized win-rate data at this shape. Win rates depend on buyer, sector, incumbency, and fit. The structural teardown is useful as an analytical lens; it is not a prediction tool.

How the scoring framework has shifted

Section 5’s evaluation framework has also evolved. In 2024, the typical weighting on a non-price-dominant enterprise SaaS RFP landed roughly: functional (40%), operational (15%), security and compliance (15%), references and past performance (15%), price (15%). The 2025 equivalent we see most often in our sample: functional (30%), operational (15%), security and compliance (25%), references (10%), price (10%), and a new “responsible AI and data handling” band at around 10%. The shift is directional, not uniform; specific buyers weight these categories very differently, and some buyers still weight price heavily enough to disqualify everything else.
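
The directional shift can be made concrete with a small weighted-score sketch. The weight profiles below follow the percentages above; the per-category vendor scores are entirely hypothetical and exist only to show that the same vendor profile lands differently under the two frameworks.

```python
# Weight profiles from the text; vendor category scores (0-100) are
# illustrative assumptions, not data from any real evaluation.
WEIGHTS_2024 = {"functional": 0.40, "operational": 0.15, "security": 0.15,
                "references": 0.15, "price": 0.15}
WEIGHTS_2025 = {"functional": 0.30, "operational": 0.15, "security": 0.25,
                "references": 0.10, "price": 0.10, "responsible_ai": 0.10}

def weighted_score(scores, weights):
    # Categories the vendor has no score for count as zero.
    return sum(w * scores.get(cat, 0) for cat, w in weights.items())

# A vendor strong on security and AI posture, mid-pack on price:
vendor = {"functional": 75, "operational": 80, "security": 90,
          "references": 70, "price": 60, "responsible_ai": 85}

print(round(weighted_score(vendor, WEIGHTS_2024), 2))  # 75.0
print(round(weighted_score(vendor, WEIGHTS_2025), 2))  # 78.5
```

Under the 2025 weights, the same profile gains three and a half points purely from the reweighting toward security and responsible AI, before any change in the response itself.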

Three implications:

References are scored lower but scrutinized harder. The 10% weight is a floor, not a ceiling, on how much a weak reference pool can hurt a response. A reference that cannot confirm the specific operational claims in the proposal is now actively damaging at the reference-check stage, not neutral. Vendors who keep their reference narrative consistent with their proposal narrative — same claimed metrics, same operational descriptions — score consistently. Vendors whose reference contacts haven’t been briefed on the specific bid narrative score down.

The “responsible AI” band is treated as pass/fail inside a weighted framework. Even at 10%, failure here functions as a disqualifier in most buyers’ internal scoring rubrics. Score zero on the AI band and the overall tally collapses. A vendor without pre-approved responses to this section cannot rely on strength in other sections to compensate.
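
A minimal sketch of that rubric behavior, assuming a gate rule of the kind described above; the disqualification logic and all scores are illustrative, not any specific buyer’s rubric:

```python
# Hypothetical rubric: the responsible-AI band carries only a 10%
# weight, but a zero on it disqualifies the bid outright rather than
# merely costing 10 points of the tally.
WEIGHTS = {"functional": 0.30, "operational": 0.15, "security": 0.25,
           "references": 0.10, "price": 0.10, "responsible_ai": 0.10}

def evaluate(scores, weights=WEIGHTS, gate="responsible_ai"):
    """Return the weighted score, or None if the gated band scores zero."""
    if scores.get(gate, 0) == 0:
        return None  # pass/fail: the weighted tally never runs
    return sum(w * scores.get(cat, 0) for cat, w in weights.items())

# Strong everywhere except the AI band vs. a merely solid all-rounder:
strong_elsewhere = {"functional": 95, "operational": 90, "security": 95,
                    "references": 90, "price": 85, "responsible_ai": 0}
balanced = {"functional": 75, "operational": 80, "security": 80,
            "references": 70, "price": 70, "responsible_ai": 80}

print(evaluate(strong_elsewhere))  # None: disqualified despite strength elsewhere
print(round(evaluate(balanced), 2))  # 76.5
```

The stronger vendor on every other axis never reaches scoring at all, which is the point: the 10% label understates the band’s real effect.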

Price weighting is moving in two directions at once. Regulated-sector buyers (healthcare, finance) are weighting price lower because compliance and AI posture are absorbing the delta. General commercial buyers under cost pressure are weighting price higher. Treat the RFP’s stated weights as load-bearing information; they are no longer a template the buyer copied from last year’s document.

What the teardown is useful for

Three practical applications for a proposal team reading this post:

  • As a template-refresh trigger. If your response template was built against 2024-era buyer expectations, the section weights and required evidence list are stale. The gap shows up in the responses.
  • As a bid/no-bid filter. The three red flags above are each reasons to pass on a bid, or at least to scope response effort narrowly. A team that reads Section 5, Section 2, and Section 8 before committing saves effort on bids that were going to be lost at intake.
  • As a capture-plan input. A capture plan that anticipates the 2025 shape has a better shot at targeting its three win themes against what the buyer will actually score. A capture plan built for the 2024 shape under-serves the AI and data-handling dimensions.

The takeaway

The enterprise SaaS RFP in 2025 is 30% longer than it was in 2024, and the growth is in security, data-handling, and AI-usage. A vendor whose template hasn’t been updated for two years is responding to the 2024 document from the 2025 version, and the gap is visible in the response. The update is not optional. It is the work of Q1 2026.

Sources

  1. VisibleThread — Government proposal writing key steps
  2. Fairmarkit — 4 RFP pain points
  3. Safe Security — Vendor security questionnaire best practices
  4. PursuitAgent — Q4 RFP volume retrospective