Field notes

Federal RFP word counts, 2024 to 2025

What two years of public federal RFPs on SAM.gov tell us about response-document length, page-count caps, and the directional drift of complexity. Research note with sample-size caveats.

The PursuitAgent research team · 9 min read · Research

There is a folk wisdom in federal proposal work that RFPs are getting longer. The 50-page cap that the GAO complained about in the late 2010s is now the 75-page cap that compliance officers complain about in 2025. Whether the RFPs themselves are longer is a question we can partially answer with public data on SAM.gov — and one whose answer is more nuanced than the folk wisdom suggests.

This is a research note, not a benchmark. We look at what’s publicly searchable on SAM.gov from January 2024 through May 2025, describe what we can and can’t measure, and report directional findings with sample-size caveats. We do not have a 50,000-RFP corpus and we are not pretending to.

What’s publicly available

SAM.gov is the consolidated federal procurement portal that replaced FedBizOpps. Solicitations posted to SAM.gov include the solicitation document itself, attachments, amendments, and Q&A. The portal supports search by NAICS code, set-aside type, posted date, and response deadline, plus free-text search of solicitation titles and synopses. It does not support, in any first-class way, full-text search across attached PDFs.

What this means in practice:

  • We can identify solicitations by category and date.
  • We can read titles, synopses, and listed attachment metadata.
  • To analyze the contents of an RFP — page counts, section structure, requirement counts, the text of compliance language — we need to download the document and parse it ourselves.

Bulk download from SAM.gov is rate-limited and the structure of attached files varies wildly: some agencies post a single consolidated PDF, others post a folder of 12 attachments, still others post Word documents with embedded Excel sheets. Building a clean, normalized corpus across thousands of RFPs is a non-trivial engineering project, and SAM.gov’s terms of service govern bulk pulls. We did not undertake that project for this post. What we did do is sample.

Our sample

We pulled roughly 100 randomly sampled RFPs per quarter, per cluster, posted between Q1 2024 and Q2 2025, across three NAICS clusters: 541 (professional services), 518 (data processing and hosting), and 336 (transportation equipment). That left 1,000 RFPs total after de-duplication and removal of solicitations that turned out to be sources-sought notices or amendments rather than full RFPs.
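
For readers who want to reproduce the metadata pull, a minimal sketch follows. It uses the public SAM.gov Get Opportunities API; the endpoint and parameter names reflect the public v2 documentation at the time of writing, but the API key placeholder, the pagination convention, the rate-limit pause, and the use of three-digit NAICS prefixes are illustrative assumptions, not a tested pipeline.

```python
import random
import time

import requests

# Sketch of one quarter's metadata pull for one NAICS cluster. Endpoint and
# parameter names per the public SAM.gov Get Opportunities API (v2); treat
# the specifics (ptype codes, page size, prefix matching) as assumptions.
API = "https://api.sam.gov/opportunities/v2/search"
API_KEY = "YOUR_SAM_GOV_API_KEY"  # issued through a SAM.gov account

def pull_quarter(naics_prefix: str, posted_from: str, posted_to: str) -> list[dict]:
    """Page through one quarter of solicitation metadata for one NAICS prefix."""
    records, offset = [], 0
    while True:
        resp = requests.get(API, params={
            "api_key": API_KEY,
            "ptype": "o",               # solicitations, not sources-sought notices
            "ncode": naics_prefix,
            "postedFrom": posted_from,  # MM/dd/yyyy, per the API docs
            "postedTo": posted_to,
            "limit": 1000,
            "offset": offset,           # record offset; pagination convention assumed
        }, timeout=60)
        resp.raise_for_status()
        batch = resp.json().get("opportunitiesData", [])
        records.extend(batch)
        if len(batch) < 1000:
            return records
        offset += len(batch)
        time.sleep(1)  # stay well inside the published rate limits

# e.g. Q1 2024, professional services, then sample ~100 for the quarter
q1_541 = pull_quarter("541", "01/01/2024", "03/31/2024")
sample = random.sample(q1_541, min(100, len(q1_541)))
```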

For each RFP, we recorded:

  • Total word count of the solicitation document (Section L plus Section M plus the Statement of Work/PWS, where present).
  • Page-count limit on the proposal response (often stated in Section L).
  • Number of distinct requirements in Section L and Section M (we counted instances of “shall,” “will provide,” “must,” and the explicit “describe” / “include” instructions; a minimal sketch of this pass follows the list).
  • Number of attachments referenced.
  • Q&A round count, where Q&A had concluded by the time we sampled.
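
The requirement-count pass is the crudest of these. A minimal sketch of it, matching the cue words listed above:

```python
import re

# Cue words from the methodology above. A "shall" inside a quoted FAR clause
# counts the same as a "shall" in the customer's own criteria; the caveat
# discussed in the limitations section applies.
CUES = re.compile(r"\b(shall|must|will provide|describe|include)\b", re.IGNORECASE)

def count_requirement_cues(section_text: str) -> int:
    """Count cue-word hits in Section L/M text extracted from a solicitation."""
    # Collapse whitespace so phrases split across PDF line breaks still match.
    normalized = " ".join(section_text.split())
    return len(CUES.findall(normalized))
```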

This is a sample, not a census. The directional findings below are exposed to noise from the modest sample size, to selection effects from the choice of NAICS clusters, and to the inherent imprecision of counting “requirements” with regular expressions over a heterogeneous corpus. We will not claim percentage changes to the decimal point. We will describe direction and magnitude.

What we found, directionally

Solicitation length is approximately flat, with a long tail

The median solicitation in our sample is in the range of 60–80 pages of substantive content (excluding attached terms-and-conditions templates and standard FAR clauses incorporated by reference). That median has not moved much across the six quarters we sampled. The spread, however, has widened. The 90th percentile is materially longer in 2025 than it was in 2024 — in the rough range of 180–220 pages versus 130–160 pages a year earlier — driven primarily by IT-modernization and data-platform solicitations from a few specific agencies that have published increasingly elaborate technical-evaluation criteria.

A practical implication: if your team’s intake estimate is “the typical federal RFP is 60 pages,” that is still a defensible estimate of the median. If your team’s estimate is “all federal RFPs are around 60 pages,” your tail-risk planning is off. The longest 10% are getting longer, and they are concentrated in the technology-services cluster you most likely live in.

Requirement counts have grown noticeably

Across our sample, the median count of “shall” / “will provide” / “must” / “describe” instructions has trended up. We are reluctant to publish a precise percentage because the regex methodology is imperfect — a “shall” inside a quoted FAR clause counts the same as a “shall” in the customer’s own evaluation criteria, and our parser does not always disambiguate. Directionally, however, the median requirement count rose by what appears to be 15–25% across the sample window, with most of the increase concentrated in cybersecurity, supply-chain, and data-handling requirements rather than in functional capability.

This matches what practitioners report. The VisibleThread warning that “rushing into writing without fully understanding the requirements is the leading cause of proposal failure” bites harder as requirement counts grow, because fully understanding the requirements takes proportionally more effort. A 250-row compliance matrix is a 250-row compliance matrix; a 320-row matrix is materially more work.

Page-count caps haven’t materially loosened

The folk wisdom that “the page caps keep going up” is partly true and partly misleading. Headline page caps in our sample remained largely concentrated in familiar bands — 25, 40, 50, 75 pages — and the overall mix did not shift dramatically. What did shift is what those page counts exclude: more solicitations now define an excluded set of artifacts (resumes, past-performance write-ups, technical appendices) that do not count against the headline cap. The effective response length — counting everything submitted, including separately page-limited annexes — has grown even when the headline number has not.

This is why a writer who tracks the headline cap will tell you nothing has changed, and a proposal manager who tracks total submitted pages will tell you everything has gotten longer. Both are correct by their own measure.

Q&A rounds and response windows are flat

Two metrics we expected to drift have been remarkably stable in our sample. The median number of Q&A rounds per solicitation is one. The median response window from RFP release to proposal due date sits in the 30–45 day range for full-and-open competitions, with shorter windows for set-aside actions. Neither has moved much across the sample window.

This matters because when the RFP gets longer and the requirement count grows while the response window stays fixed, the work density per response goes up. Which is what proposal teams report — and why the Quilt observation that sales engineers spend “100 to 300 hours per RFP response” has not gotten less true over time.

Attachment counts have grown

The median attachment count rose visibly across the window. In our 2024 sample, the median solicitation came with somewhere in the range of 8–10 attached files. In our 2025 sample, the median is closer to 12–15. The growth is driven by ancillary artifacts — supplemental Q&A documents, separate cybersecurity appendices, separate small-business subcontracting plan templates, separate past-performance questionnaires that go directly to references rather than into the main response.

The implication for a proposal-software vendor is unsubtle: extraction has to handle “the RFP” as a folder, not a file. The team that treats the main solicitation PDF as the source of truth and ignores the 14 attached templates will discover three weeks in that the cybersecurity appendix had a 25-question survey embedded in it that nobody indexed.
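
What that looks like mechanically, as a hedged sketch: index every file in the downloaded folder before any extraction pass, so embedded artifacts at least land in the work queue. The file-type set and manifest fields here are illustrative assumptions, not our production extractor.

```python
from pathlib import Path

# Build a manifest of every attachment in a downloaded solicitation folder.
# Anything not machine-parseable gets flagged for a human look; that is
# where the embedded 25-question survey tends to hide.
PARSEABLE = {".pdf", ".docx", ".doc", ".xlsx", ".xls"}

def index_solicitation(folder: str) -> list[dict]:
    """Walk the solicitation folder and record every file, parseable or not."""
    manifest = []
    for path in sorted(Path(folder).rglob("*")):
        if path.is_file():
            suffix = path.suffix.lower()
            manifest.append({
                "file": str(path.relative_to(folder)),
                "suffix": suffix,
                "bytes": path.stat().st_size,
                "needs_review": suffix not in PARSEABLE,
            })
    return manifest
```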

What we cannot say from this data

A few honest gaps.

Win/loss is not in this sample. SAM.gov does not publicly publish award narratives or the losing proposal documents. We cannot correlate any of these length measurements with outcomes. A separate research note would have to bring in published award notices and the GAO bid protest data, which we may do in a future post.

Section structure varies widely. The “median page count” hides substantial variation between agencies. DoD solicitations are not GSA solicitations are not VA solicitations. A larger, properly stratified sample would produce per-agency series; ours is too small for that.

Commercial-side data is not here. Federal RFPs are public; commercial RFPs are not. Whether the commercial side shows the same directional drift is a separate empirical question, and one we are unlikely to be able to answer with public data alone. The reasonable assumption is that it tracks federal — buyers copy each other, especially in regulated commercial markets — but assumption is not measurement.

The regex methodology is imperfect. Counting “shall” instances overcounts when boilerplate FAR clauses are quoted in-line and undercounts when the requirement is expressed in a structured table. A clean methodology would parse the structured-table contents separately. A team building a serious benchmark on this should plan to pay for that engineering work.
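
One cheap mitigation, sketched under stated assumptions: drop lines that cite a FAR or DFARS clause before counting cue words. The clause-citation regex is an assumption that catches common forms like “FAR 52.212-4” and misses unlabeled quotations; structured tables still need the dedicated table-extraction pass described above, which this sketch does not attempt.

```python
import re

# Skip lines that reference a FAR/DFARS clause number before counting cue
# words, so boilerplate "shall"s inflate the tally less. Heuristic only.
CLAUSE_REF = re.compile(r"\b(FAR|DFARS)\s+\d{2}\.\d{3}", re.IGNORECASE)
CUES = re.compile(r"\b(shall|must|will provide|describe|include)\b", re.IGNORECASE)

def count_cues_excluding_clauses(text: str) -> int:
    """Count cue words only on lines that do not cite a FAR/DFARS clause."""
    kept = [line for line in text.splitlines() if not CLAUSE_REF.search(line)]
    return len(CUES.findall(" ".join(kept)))
```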

What the data is good for

It is good for two practical decisions a proposal team can make this quarter.

Recalibrate intake budgets. If your standard time to extract a federal RFP was set against a 2024 baseline, a 2025 RFP will take longer: more attachments, more requirements, and more Section M evaluation criteria to read. Adjust the hours allocated to intake upward by something like 20–30%. This is directional, not precise.

Recalibrate review cadence. A response to a 320-row compliance matrix needs more pink-team time than a response to a 250-row matrix, and linear scaling is roughly right: 320 / 250 ≈ 1.28, so budget roughly 28% more review time. Teams that imported a 2024 review cadence into 2025 work without recalibration are running gold teams against drafts that haven’t been adequately red-teamed, which is one of the failure modes Lohfeld Consulting flags repeatedly.

What we’ll do next

We are likely to expand the sample to ~3,000 RFPs across additional NAICS clusters and run a clean section-aware parser rather than the regex pass we used for this post. The output will be a structured dataset of federal RFP-shape-and-size metrics with proper stratification by agency, NAICS, and contract vehicle. If we publish it, it will be alongside an honest note on what sampling methodology produced the numbers — same as this post. The folk wisdom on this category deserves a stronger empirical floor than it has, and we will keep adding to that floor as we go.

If you have suggestions on the methodology or want to flag a NAICS cluster we should add, the contact paths on the company page are the fastest way to reach the research team.

Sources

  1. SAM.gov — Contract Opportunities
  2. GAO — Bid Protest Annual Report to Congress (FY2024)
  3. FAR 15.204 — Solicitation contents
  4. VisibleThread — Government proposal writing