A grid for past-performance writeups
The table that turns 20 disorganized references into submit-ready past-performance prose. Four rows, six columns, and the discipline that makes every reference tell the same story from the buyer's angle.
Past-performance sections lose proposals. In most federal RFPs, past performance is the second most heavily weighted evaluation factor. Most teams write it the worst.
I have watched three common failure modes wreck past-performance sections repeatedly. The first: every reference is written in a different voice, because three SMEs wrote three references. The second: the references are organized around the team’s project categories, not around the evaluator’s rubric. The third: the outcome sentence on each reference is adjective-heavy and measurement-free.
A grid fixes all three. This is the one I use.
The grid
Four rows per reference. Six columns.
| Row | Column 1: Customer | Column 2: Scope | Column 3: Dollar & Vehicle | Column 4: Team & Role | Column 5: Outcome | Column 6: Tie to Current RFP |
|---|---|---|---|---|---|---|
| Identifiers | Agency name, bureau, POC name+title if citable | One-line scope | Contract number, ceiling, type (FFP/T&M/CPFF) | Prime vs sub, role, team size | One-sentence outcome | One sentence linking to current RFP |
| Narrative | Who, when (dates), what phase | What was being delivered, in three sentences | Value managed, % of ceiling executed | Your specific responsibilities, specific staff | Three metrics with numbers | Which current-RFP requirement this reference proves |
| Risk | Any relationship caveats | Scope changes during contract | Overruns, deobligations | Team continuity (did staff stay?) | Downside risks realized | None — the tie is a claim, not a risk |
| Evidence | CPARS rating if available | Contract mods | Invoicing history | Resume/clearance references | Source doc (CPARS, customer letter, metrics dashboard) | Requirement number from compliance matrix |
Four rows × six columns = 24 cells per reference. You fill the grid once per reference; the prose writes itself from the grid.
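If your shop tracks references in a spreadsheet or script, the grid is easy to encode. A minimal sketch, assuming hypothetical row and column names (the article names the columns; the field identifiers below are my own):

```python
from dataclasses import dataclass, field

# The article's four rows and six columns, as machine-checkable keys.
# These identifier strings are illustrative, not part of the method.
ROWS = ["identifiers", "narrative", "risk", "evidence"]
COLUMNS = ["customer", "scope", "dollar_vehicle", "team_role", "outcome", "rfp_tie"]

@dataclass
class ReferenceGrid:
    """One past-performance reference: 4 rows x 6 columns = 24 cells."""
    name: str
    cells: dict = field(default_factory=dict)  # (row, col) -> text

    def set(self, row: str, col: str, text: str) -> None:
        assert row in ROWS and col in COLUMNS, "unknown row/column"
        self.cells[(row, col)] = text

    def is_complete(self) -> bool:
        # A reference is submit-ready only when all 24 cells are filled.
        return all(
            self.cells.get((r, c), "").strip()
            for r in ROWS for c in COLUMNS
        )
```

The completeness check is the point: an empty risk or evidence cell is visible as a hole in the data, not hidden inside prose.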
How the grid changes what you write
Column 1 forces you to name a human. “We worked for the Department of Veterans Affairs” is a sentence your competitor also wrote. “We worked for the Office of Information and Technology at VA, under Deputy CIO [name, citable if the contract is public], from March 2022 through November 2024” is a sentence that tells the evaluator you know the customer specifically.
Column 2 forces a one-line scope before the three-sentence version. The evaluator skimming your past-performance section reads the one-liner. They read the three sentences only if the one-liner earns their attention. Write the one-liner first, and make it earn that attention.
Column 3 forces the contract shape. Federal evaluators are comparing your past-performance contract types against the current RFP’s contract type. An FFP reference supporting a T&M proposal is weaker than an FFP reference supporting another FFP — the evaluator knows this and adjusts internally. You should flag the match yourself rather than leaving it for them to notice or miss.
Column 4 forces team continuity. If the team on the reference contract is not the team proposing for the current work, the reference is weaker than it looks. The grid makes you name staff, which makes the discontinuity visible. Sometimes the right move is to drop the reference and substitute a different one where the team continuity is real.
Column 5 forces numbers. “Improved operational efficiency” does not survive the grid. The outcome cell demands three metrics with numbers. If the contract did not produce measurable outcomes, the reference does not belong in the submission. Every column-5 metric gets a citation in the evidence row (CPARS rating, customer letter, a metric from the program’s annual report).
Column 6 is the one most teams skip. Every past-performance reference should answer the implicit evaluator question “why is this reference in the proposal.” The tie-to-current-RFP cell makes you write that answer explicitly. It becomes the closing sentence of the reference narrative — “this engagement demonstrates our team’s ability to execute on requirement 3.4.2 of the current RFP: integration of legacy HR platforms with cloud-native identity providers.”
The discipline: 20 references become three
A mid-sized proposal shop typically has 15-30 past-performance candidates. The grid is the triage tool.
Step 1: Fill the grid for every candidate. This takes 20-30 minutes per reference. Fast.
Step 2: Read the current RFP’s past-performance evaluation section. Identify the factors: customer type, dollar range, recency, specific capabilities required.
Step 3: For each filled-grid reference, score it 1-5 on match to the current RFP’s factors. References scoring 4 or 5 on every factor are your finalists; if more than three qualify, keep the three with the highest totals. References scoring 3 or lower on any factor do not go in.
Step 4: For the three selected, rewrite column 6 to tie specifically to the current RFP’s evaluated capabilities. This is where the past-performance section starts earning points.
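The triage in steps 3 and 4 is mechanical enough to sketch. A minimal version, assuming hypothetical factor names and a 1-5 score per factor per reference:

```python
def triage(references, factor_scores, slots=3):
    """Select past-performance references for submission.

    references:    list of reference names.
    factor_scores: name -> {factor: score 1-5} against the current
                   RFP's evaluated factors (customer type, dollar
                   range, recency, capabilities).
    slots:         how many references the submission carries.
    """
    # Rule from step 3: a single factor at 3 or below disqualifies.
    qualified = [r for r in references
                 if min(factor_scores[r].values()) >= 4]
    # Tie-break when more than `slots` qualify: highest total score.
    qualified.sort(key=lambda r: sum(factor_scores[r].values()),
                   reverse=True)
    return qualified[:slots]
```

The disqualify-on-any-weak-factor rule is deliberately harsh: a reference that is a poor match on even one evaluated factor costs more in evaluator skepticism than it earns in volume.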
Why it works
The grid enforces three things an unaided writer rarely enforces.
First, uniform voice. Every reference has the same structure, which means the reader’s cognitive load is the same on each one. The references compare cleanly because they are structured to be compared.
Second, evaluator-centric framing. Column 6 is literally the evaluator’s question — the rubric-tie. Writing the rubric-tie explicitly forces you to have one. Many references that felt strong in the team’s head turn out not to have one; the grid reveals this.
Third, measurability. Column 5 rejects adjectives. A reference that cannot produce three metrics is not rebuttal-ready when the evaluator asks “what did this contract actually deliver.”
What the grid does not do
The grid does not make a weak reference strong. A contract that was executed poorly, with overruns and scope changes and team turnover, fills the grid honestly and looks weak — which is the correct outcome. Some teams try to paper over the weaknesses; the grid defeats that by making every field visible.
The grid does not produce a good writer. It produces a structured input. Turning the grid into prose still requires judgment — sentence rhythm, emphasis, voice. The grid gives the writer a map; the writer still writes.
A small worked example
For a reference at VA on HR modernization:
- C1 Identifiers: Department of Veterans Affairs, Office of IT (OI&T), Deputy CIO office; March 2022 — November 2024; closeout phase.
- C2 Scope narrative: Migration of legacy VA HR-Smart platform from on-premise Oracle HCM to cloud-native managed service; identity federation with VA SSO; data-migration of ~400K employee records; change management across 18 regional HR offices.
- C3 Dollar & vehicle: $24.8M ceiling, $23.1M executed, CPFF task order under OASIS+ IDIQ.
- C4 Team & role: Prime; 38-person team at peak; program manager and three senior architects still on current roster (resumes in Appendix C).
- C5 Outcome narrative: HR-Smart cutover completed 6 weeks early; post-migration Tier-1 HR ticket volume dropped 34% against 6-month baseline; 99.94% data integrity validated in post-migration audit (source: VA post-migration report, Feb 2025).
- C6 Tie to current RFP: Directly demonstrates capability for requirement 3.4.2 (legacy-system modernization with cloud-native identity integration) and requirement 4.1 (enterprise HR data migration at scale).
The prose version writes itself from these six cells, in roughly 180 words, with every fact citable.
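The "writes itself" step can even be bootstrapped. A toy sketch, assuming hypothetical cell keys; it produces a skeleton in column order that a writer then shapes, not finished prose:

```python
def draft_skeleton(cells):
    """Stitch the six cells of one reference into a paragraph skeleton.

    cells: dict mapping column name -> cell text. The output is a
    column-ordered scaffold; sentence rhythm, emphasis, and voice
    are still the writer's job.
    """
    order = ["customer", "scope", "dollar_vehicle",
             "team_role", "outcome", "rfp_tie"]
    # Normalize each cell to end with exactly one period, then join,
    # so the rubric-tie cell lands as the closing sentence.
    return " ".join(cells[c].rstrip(".") + "." for c in order)
```

Trivial as it is, the function encodes the ordering discipline: customer first, rubric-tie last, every fact traceable to a cell.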
Takeaway
Past-performance writing fails when it is narrative-first instead of structure-first. The grid is four rows and six columns. It takes 30 minutes per reference. It turns 20 loosely organized references into three tightly written ones, each explicitly tied to the current RFP’s rubric. Nothing fancy. Nothing new. Just the discipline most teams skip when they write past-performance the night before submission.
See “Past performance in three sentences” for the compressed version of the writeup, and “Past performance that actually maps” for more on the rubric-tie discipline.