A cross-cut of 30 municipal RFPs
Patterns in sub-state procurement across 30 municipal RFPs posted in Q4 2025. Where buyer-side guidance is catching up to federal norms, where it's still behind, and what the data says for a vendor deciding whether to chase this tier.
We pulled 30 municipal RFPs posted between October and December 2025 from public procurement portals across 12 states. “Municipal” here means city, county, and special district — below the state tier, above the school-district tier. The cross-cut is not random: we selected RFPs with estimated contract value above $250K because below that threshold, the mechanics get thin.
The headline: municipal RFPs are not small federal RFPs. They share some structural DNA — compliance matrices, evaluation rubrics, Q&A windows — and they differ in ways that a vendor crossing tiers for the first time should know about before they bid.
Structure, at a glance
| Metric | Median | Federal comparison (our prior work) |
|---|---|---|
| Total RFP page count | 38 | 68 |
| Evaluation-criteria section length | 4 pages | 11 pages |
| Number of distinct scored dimensions | 6 | 12 |
| Q&A window (days) | 10 | 21 |
| Total response window (days) | 28 | 42 |
| Requires insurance certificate up front | 29 / 30 | 30 / 30 |
| Requires prior-performance references | 27 / 30 | 30 / 30 |
| Evaluation rubric published | 18 / 30 | 28 / 30 |
| Uses compliance-language verbs (shall/must) consistently | 11 / 30 | 28 / 30 |
Two things stand out. Response windows are shorter — 28 days versus 42. Compliance language is inconsistent — only 11 of 30 municipal RFPs used shall/must discipline across the whole document, versus 28 of 30 in our prior federal cross-cut.
The shorter window means the proposal team has less time to do capture work on an RFP that might have an unfamiliar procurement office behind it. The weaker compliance language means the evaluation criteria are often ambiguous in ways the buyer hasn’t realized until proposals come in.
Where buyer-side guidance is catching up
A subset of the 30 RFPs — eight of them, drawn from Austin, San Jose, King County, Denver, Arlington County, Durham, Boulder, and Multnomah County — published an evaluation rubric with numeric weights, a scoring methodology, and a public record of prior-contract awards in the same category. These RFPs read like scaled-down federal ones. A vendor evaluating them can make a bid/no-bid decision against real data.
The other 22 didn’t. Of those, 14 published evaluation criteria in narrative form (“the Committee will evaluate on Price, Technical Approach, and Team Qualifications”) without weights. Eight published weights but did not publish prior awards or a scoring methodology.
The “narrative criteria, no weights” pattern is the one to watch. It’s the procurement equivalent of an unpublished grading rubric, and it produces selection decisions that the losing bidder cannot usefully contest. Municipal procurement offices we talked to off-record said they are aware of this and are adopting weighted rubrics as they modernize. The pace is uneven.
Where buyer-side guidance is still behind
Five patterns:
Late-breaking addenda. Seven of the 30 RFPs issued addenda in the last five business days of the response window. Federal RFPs do this too, but federal windows are longer, so five days out is still a week of working time. In a 28-day municipal window, a late addendum can invalidate a compliance matrix that’s three days from submission.
Incomplete Q&A publication. Thirteen of 30 published the Q&A responses as a consolidated document; the others answered individual questions by email to the questioner without sharing responses publicly. That’s not legal in every jurisdiction (state procurement codes vary) but it’s common, and it produces an information asymmetry between incumbent bidders who know to ask and newcomers who don’t.
Evidence formats that don’t travel. One RFP asked for “proof of cybersecurity posture”; a reasonable response includes SOC 2, ISO 27001, or an equivalent attestation. Three of the 30 required specific in-state certifications that don’t exist outside the state; two more required evidence that maps to no industry framework. A vendor from out of state either writes a narrative explaining the absence or doesn’t bid.
Insurance requirements that exceed contract value. Fourteen of 30 required $5M or higher in general liability and cyber liability insurance. For a $400K contract, that’s an insurance profile that many mid-market vendors can’t meet without riders. The implicit effect is to filter out small vendors; whether that’s the intent varies by buyer.
Past-performance requirements scoped to in-state work. Nine of 30 preferred or required past-performance references from in-state customers. This is a legitimate preference for a procurement office that has been burned by out-of-state vendors; it’s also a structural incumbent advantage. Vendors new to a region should flag this before bidding.
What the data says for a vendor deciding whether to chase this tier
Three findings:
Win rates are plausibly higher than federal. We don’t have direct win-rate data on the 30 RFPs yet; we’ll publish a follow-up once awards are public. Anecdotally, municipal RFPs draw fewer bidders — we counted publicly visible bidder lists for 12 of the 30 and found a median of 4 bidders per RFP, versus 9 in the federal cross-cut. Fewer bidders, shorter cycles, higher apparent win rate.
Total addressable contract value per RFP is lower. Median municipal contract is $720K of first-year value (where disclosed); median federal is $4.1M. The math has to work at the smaller per-bid ACV.
Relationship work pays off differently. Federal capture depends on GovCon relationships and agency-specific intelligence. Municipal capture depends on showing up at pre-bid meetings, answering Q&A thoroughly, and being known to the procurement office. Both require investment; the municipal version is less specialized and more local.
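A rough way to weigh the tradeoff in the findings above: combine the median ACVs and median bidder counts into an expected value per bid. This is a sketch, not a model — it assumes every bidder has an equal chance of winning (win probability of 1/n), which real capture work exists to break.

```python
# Expected-value-per-bid sketch using the medians reported above.
# Assumption (not in the data): all bidders are equally likely to win,
# so win probability is approximated as 1 / median bidder count.

def expected_value(acv: float, median_bidders: int) -> float:
    """First-year contract value weighted by a naive 1/n win probability."""
    return acv / median_bidders

municipal = expected_value(720_000, 4)    # median municipal ACV, 4 bidders
federal = expected_value(4_100_000, 9)    # median federal ACV, 9 bidders

print(f"municipal: ${municipal:,.0f} per bid")
print(f"federal:   ${federal:,.0f} per bid")
```

Under this naive assumption, the federal tier still wins on raw expected value per bid (roughly $456K versus $180K), which is why the municipal case has to rest on lower cost per bid, shorter cycles, and higher bid volume rather than per-bid payout.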
The honest summary: if you’re a mid-market vendor with an in-region sales presence, the 30-RFP sample suggests municipal tier is a real market with manageable mechanics. If you’re out of region and relying on a remote proposal team, the information asymmetries in the buyer-side guidance will cost you bids you could have otherwise won.
Method
30 RFPs pulled from OpenGov, DemandStar, BidNet, and direct city/county portals between October 1 and December 31, 2025. RFPs selected for estimated contract value above $250K. Categories covered: IT services (11), professional services (8), construction (5), software (4), security (2). Page counts measured on the core RFP document, excluding attachments. Compliance-language analysis ran a text scan for shall, must, will, should, and may, followed by a manual review for ambiguity.
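The text-scan half of the compliance-language analysis can be sketched in a few lines. This is a minimal illustration, not our production pipeline; the manual ambiguity review it feeds is not automated here.

```python
# Minimal sketch of the compliance-language scan: case-insensitive,
# whole-word counts of the five modal verbs across an RFP's text.
import re
from collections import Counter

MODALS = ("shall", "must", "will", "should", "may")

def modal_counts(text: str) -> Counter:
    """Count compliance modals, ignoring case and punctuation."""
    words = re.findall(r"\b[a-z]+\b", text.lower())
    return Counter(w for w in words if w in MODALS)

sample = "The Contractor shall provide X. The City may, at its option, waive Y."
counts = modal_counts(sample)
print(counts)  # shall: 1, may: 1
```

A document that scores high on shall/must relative to will/should/may is using directive language consistently; a mix signals the ambiguity the manual review then has to untangle.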
We did not anonymize. A full list of the 30 RFPs, with links to the original solicitations, is available on request. Email research@bidforge.com.
Sources
1. PursuitAgent — A cross-cut of federal RFP word counts, 2024–2025
2. PursuitAgent — The 42-page RFP the state of Georgia posted last week
3. NIGP — The Institute for Public Procurement
4. NASPO — National Association of State Procurement Officials
5. US Census — State and Local Government Finances 2022