Past performance that actually maps to the scope
Selecting which prior contracts to cite is a craft skill, not a database query. Three worked examples of past-performance selection — what to cite, what to omit, why the relevance map matters more than the impressive number.
The past-performance section of a proposal is the section most teams treat as a database query. They have a list of reference contracts in a tracker somewhere; they pick the ones with the largest contract values, the most prestigious customers, or the most recent dates; they paste the PPI forms in. The section is technically complete. It is also, in most cases, scoring well below its potential.
Past-performance selection is a craft skill. The right contract to cite is the one that maps cleanly to the scope of the current bid, not the one that most flatters the company’s resume. In a best-value tradeoff, the mapping is what gets scored. The flattering credentials, when they do not map, get scored as filler.
Three worked examples below. All composite, none drawn from a specific bid. The pattern in each one is the lesson.
The selection question
Before the examples, here is the question to ask about any candidate contract.
Does this contract demonstrate that we have done something similar to what the new RFP is asking us to do?
Three sub-questions inside that.
Similar in scope. A $50M contract for IT staff augmentation does not demonstrate competence for a $5M application-modernization contract. Different work. The $5M contract you delivered three years ago is more relevant than the $50M one delivered last quarter, if the $5M one is on-scope.
Similar in customer profile. Federal evaluators score federal past performance higher than commercial. State-government evaluators score state and local past performance higher than federal. The customer profile is part of the scope.
Similar in delivery model. A contract you delivered as a prime is structurally different from one you delivered as a sub. A fixed-price delivery is structurally different from a T&M delivery. If the new RFP is fixed-price prime, the cited contract should be too, or you should be explicit about the difference and why your experience translates.
The selection job is to find three to seven contracts that score high on all three of those dimensions. Not the impressive ones. The mapped ones.
Example one — when the obvious choice is wrong
The new RFP is a state-government cloud migration project. Scope: migrate the agency’s legacy case-management system from on-prem to AWS, with parallel operation during a 90-day cutover, $4M ceiling.
The team has three candidate past contracts to cite.
Contract A. $50M federal cloud migration, completed two years ago, prime, multi-year, multiple agencies. Headline-impressive.
Contract B. $3.8M state-government case-management system implementation, completed last year, prime, fixed-price, similar agency type. Less impressive on paper.
Contract C. $12M commercial cloud migration for a healthcare company, in flight, prime, T&M, similar technical complexity to the new RFP.
Naive selection picks A. Headline numbers, recent enough, federal pedigree. The story is “we can do cloud migrations at scale, look at this big one.”
The right selection picks B. State-government context: the same evaluator population, the same procurement mechanics, the same kind of agency reporting requirements. The dollar value is smaller, but the contract is on-scope for the new bid.
C is second, behind B. The technical complexity matches, but the customer profile (commercial, healthcare) does not, and the delivery model (T&M) does not match the new RFP’s fixed-price expectation.
The cited list is B, then C, then A — in that order, with the editorial weight on B. The narrative around B is “we delivered to a peer agency, fixed-price, with the same kind of governance the current RFP describes.” That is what the evaluator scores. A flattering federal contract that does not map is scored as filler, not as evidence.
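The same reasoning can be sketched as code against the three dimensions from the selection question. This is a toy sketch, assuming Python: the 0-to-2 scores are hand-assigned from the judgments argued above, and A’s delivery score is an inference, since Contract A’s pricing model is not stated.

```python
# Example one as a relevance profile. Scores are hand-assigned from the
# reasoning above (0 = no match, 1 = partial, 2 = strong); the point is
# the per-dimension profile, not the arithmetic.
DIMENSIONS = ("scope", "customer profile", "delivery model")

profiles = {
    "A: $50M federal cloud migration":           (1, 0, 1),
    "B: $3.8M state case-management, FFP prime": (2, 2, 2),
    "C: $12M commercial migration, T&M":         (2, 0, 0),
}

for name, scores in profiles.items():
    row = ", ".join(f"{dim}={s}" for dim, s in zip(DIMENSIONS, scores))
    print(f"{name}: {row}")

# B leads on every dimension. C edges out A on the dimension that matters
# most here (technical scope), so the cited order is B, then C, then A.
```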
Example two — when the mapping is partial
The new RFP is a federal DoD software development project. Scope: secure DevSecOps pipeline for a classified program, 18-month period of performance, secret-cleared personnel required.
The team has two candidate contracts.
Contract D. $25M unclassified DoD software development, completed three years ago, prime, no clearance requirements. Strong on the technical-approach dimension; weak on the security/clearance dimension.
Contract E. $4M classified federal civilian intelligence-community engagement, completed last year, sub to a larger prime, secret-cleared personnel throughout. Weak on the prime-experience dimension; strong on the clearance dimension.
Neither contract is a perfect map. The right move is not to pick one and pretend otherwise. The right move is to cite both and write the relevance narrative explicitly.
For contract D: “Demonstrates our DevSecOps pipeline engineering at DoD scale, with the technical complexity comparable to the current RFP. Note the unclassified context — for cleared work, see Contract E.”
For contract E: “Demonstrates our cleared-personnel delivery in a federal classified environment with security posture matching the current RFP’s requirements. Note the subcontract structure — for prime delivery, see Contract D.”
The two contracts together cover the scope dimensions that no single contract covers. The narrative makes the coverage visible. Evaluators reward visibility — they do not have time to map your past performance themselves, and the team that does it for them gets credit.
The failure mode here is citing only D and hoping the clearance question does not come up, or citing only E and downplaying the prime question. Either omission is evident to a reading evaluator. The honest, mapped narrative scores higher than the optimistically cropped one.
Example three — when the right contract is small
The new RFP is a commercial enterprise procurement for a workflow-automation platform. Scope: integrate with the buyer’s existing Salesforce, ServiceNow, and Workday environments, three-year term, annual contract value approximately $1.2M.
The team has many large contracts. The relevant prior contract is $400K, completed 18 months ago, with a customer who had the same trio of integrations.
The team’s instinct is to cite the larger contracts first because they make the company look bigger. The right selection cites the $400K contract first, because it directly answers the new buyer’s most specific question: “have you done this exact integration shape before?”
The narrative: “We integrated Salesforce, ServiceNow, and Workday for [comparable customer], delivered on schedule, with the integration patterns described in Section 3.2 of this response. The technical approach below is built on the lessons from that engagement.”
The larger contracts go in the supporting position, with brief narratives demonstrating scale capability. The headline is the $400K contract because it is the on-scope one.
This goes against the instinct. The instinct says “lead with our biggest.” The discipline says “lead with our most relevant.” Evaluators read past-performance sections the same way they read any other proposal section — looking for evidence that the vendor has done this thing, not for general credentials.
The relevance map
Before drafting the past-performance section, build a small table. Three parts:
- The scope dimensions the new RFP scores against (typically: technical work type, customer profile, delivery model, dollar value, period of performance, special requirements like clearances or certifications).
- The candidate prior contracts.
- A score per (dimension, contract) cell — how strongly that contract demonstrates that scope dimension.
The table makes the selection mechanical. The contracts that score highly across the most cells are the ones to cite. The narrative for each cited contract is the explanation of which scope dimensions it demonstrates and which it does not.
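As code, the table might look like the sketch below. The dimension names come from the list above; the contracts and every score are invented placeholders, there only to show the mechanics.

```python
# Relevance table: one row per candidate contract, one score (0-2) per
# scope dimension. All contracts and scores below are placeholders.
DIMENSIONS = [
    "technical work type", "customer profile", "delivery model",
    "dollar value", "period of performance", "special requirements",
]

table = {
    "Contract 1": [2, 2, 2, 1, 2, 0],
    "Contract 2": [2, 0, 1, 2, 1, 2],
    "Contract 3": [0, 2, 2, 0, 1, 1],
}

# Rank by breadth first (how many dimensions are demonstrated at all),
# then by total strength.
ranked = sorted(
    table.items(),
    key=lambda item: (sum(s > 0 for s in item[1]), sum(item[1])),
    reverse=True,
)

for name, scores in ranked:
    gaps = [d for d, s in zip(DIMENSIONS, scores) if s == 0]
    print(f"{name}: total {sum(scores)}, gaps: {gaps or 'none'}")
```

The gaps list is the raw material for each citation’s narrative: the dimensions a contract does not cover are the ones the narrative should name and route to another citation, as in example two.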
The table is the part most teams skip. The selection then defaults to the company’s resume, ranked by impressiveness. Impressiveness is not the variable being scored.
What the evaluator is doing
In federal procurement, the past-performance evaluation is typically scored on:
- Recency. How recent the contract is (typically a 3-to-5-year window).
- Relevance. How well the cited work maps to the new scope.
- Quality. How well the vendor performed (often via PPI forms or CPARS ratings).
Recency is mechanical — either the contract is in the window or it is not. Quality is determined externally — the vendor cannot retroactively improve a CPARS rating. The variable the vendor controls is relevance, and relevance is the variable the cited-contract selection moves.
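A toy model of the three axes makes the control surface visible. Everything here is illustrative: the field names, the five-year window, and the idea of carrying the CPARS rating as an opaque external string.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Toy model of the three evaluation axes. Only relevance is under the
# vendor's control at selection time: recency is a date check against the
# solicitation's window, and quality arrives from outside (e.g. CPARS).
@dataclass
class Citation:
    completed: date
    relevance: int    # from the relevance table: dimensions demonstrated
    cpars: str        # external rating; not editable after the fact

    def in_window(self, as_of: date, years: int = 5) -> bool:
        # Approximate window; real solicitations state the exact terms.
        return as_of - self.completed <= timedelta(days=365 * years)

c = Citation(completed=date(2023, 4, 1), relevance=5, cpars="Very Good")
print(c.in_window(as_of=date(2025, 11, 1)))  # True: inside the window
```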
A mapped relevance narrative — “this contract demonstrates X dimension of the scope, here is the specific evidence” — scores higher than an unmapped citation. The evaluator’s job is easier, the score reflects that, and your team did the work the evaluator would otherwise have to do.
What this is not
It is not a license to over-claim. The mapping has to be honest. A contract that does not actually demonstrate the scope dimension cannot be argued into demonstrating it through clever narrative. Evaluators read past-performance sections looking for stretch — claims that exceed the evidence — and they downgrade for it. Honesty is the floor.
It is not a substitute for actually having done the work. If the company has not done the work the new RFP scopes, the past-performance section will not save the bid. It will, at best, be honest about the relevance gaps. Some bids the team should not be bidding on, and the past-performance gap is the early warning.
The short version
The past-performance section is a relevance map, not a resume. The contracts to cite are the ones that demonstrate the specific scope dimensions the new RFP scores against. The narrative is the explicit map between the cited contract and the new scope. The largest, most impressive prior contracts are sometimes the right ones to cite and sometimes not — the test is mapping, not flattery.
Build the relevance table before you start writing. The selection takes 30 minutes when the table is in front of you. It takes hours, and produces a worse outcome, when the team selects by instinct.