Field notes

The draft review heatmap: which sections attract edits

A year of reviewer edit data across hundreds of drafts. Which sections attract the most edits in pink, red, and gold review — and what that pattern tells us about where craft weaknesses cluster.

Sarah Smith 7 min read Craft

A year of color-team review data across hundreds of drafts gives us something most proposal shops do not keep: a heatmap of where edits land. Not which kinds of edits, which I will come back to, but which sections attract the most edits across pink team, red team, and gold team. The pattern is consistent enough that it deserves naming.

The short version: four sections attract roughly 60% of all reviewer edits. The other 40% spreads thinly across everything else. The four are, in order of edit density per word:

  1. The executive summary.
  2. The technical approach section, specifically its opening two paragraphs.
  3. The past-performance narratives where a specific claim is made about outcomes.
  4. The management approach section when it names specific people.
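The ranking above is by edits per word, which is a simple computation once you have per-section edit counts and word counts. A minimal sketch, with made-up numbers for illustration (not the article's underlying data):

```python
# Illustrative sketch: rank sections by edit density (edits per word).
# Section names and counts here are invented for the example.

def edit_density(sections):
    """Return sections sorted by edits per word, densest first.
    sections: {name: (edit_count, word_count)}"""
    return sorted(
        ((name, edits / words) for name, (edits, words) in sections.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

sections = {
    "executive summary": (48, 900),
    "technical approach (opening)": (30, 700),
    "past performance": (25, 1200),
    "management approach": (18, 1000),
    "compliance boilerplate": (4, 2500),
}

for name, density in edit_density(sections):
    print(f"{name}: {density:.3f} edits/word")
```

Normalizing by word count matters: a long boilerplate section can collect more raw edits than a short executive summary while still being far less edit-dense.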

This is a practitioner observation, not a published benchmark. I know of no industry survey that measures edit density per section across responses, and I would not trust one that claimed to — every shop edits differently. But the pattern in our sample is consistent enough, across enough responses from different teams, that it is worth reading as signal about where craft weakness clusters.

Why these four

The executive summary attracts edits because it is the section with the most leverage per word and the least structural support. It sits outside the technical and management volumes, so it cannot borrow their proof. It compresses the capture plan into one page, so every weak claim is visible. It gets read by the signing executive, who has their own language for the business and marks up anything that does not match. All four dynamics compound. The exec summary pillar walks this at length.

The technical approach section’s first two paragraphs attract edits because they are where the framing of the technical solution happens, separate from the content. A technical section that opens with “our proven methodology” and then describes a solid solution will get the first two paragraphs rewritten and the solution left mostly untouched. The Shipley tradition calls this the section’s “topic frame” and treats it as the single highest-leverage paragraph in any volume. Red-team reviewers agree — they edit it more than any other paragraph.

Past-performance narratives with outcome claims attract edits because the claims are where the response commits to specific numbers. A reviewer who is also a subject-matter owner on the referenced engagement will correct the number — not because the writer lied, but because the writer rounded, or compressed, or picked the most favorable framing the underlying evidence would support. The edits here almost always dial specificity down, as reviewers make claims more defensible.


The management approach section attracts edits when it names specific people because the named people have opinions about what the response says about them. Not vanity — calibration. A program lead who reads a section describing themselves as “managing a team of 25” will correct it to “managing a team of 25 across 4 delivery pods” because the pod structure is how they would describe the work. These edits are small but frequent, and they make the response ring true in the way only the named person’s own language can.

The 40% that spreads thinly

The other 40% of edits land everywhere else — compliance boilerplate, pricing narrative, appendices, cover letter, transmittal memo, section transitions, tables of contents. Not zero edits; just diffuse. Each of those sections gets a few edits per review; none of them dominate. The pattern is mechanical: the sections that are most formulaic attract the fewest edits, because there is less subjective judgment to exercise.

A corollary: if your compliance boilerplate or your pricing narrative is attracting a lot of edits in red team, something is structurally off. Either the section is not formulaic enough (you are writing prose where you should be filling a template) or the reviewers are miscalibrated (they are catching cosmetic things instead of the load-bearing content). Either is worth investigating.

What the heatmap tells us about craft

Three observations, each practical.

Edit density is not the same as edit importance. The sections that attract the most edits are not always the sections where the edits matter most. A dozen edits to an executive summary are expected and productive; a dozen edits to a pricing narrative late in gold team are usually a fire. Track both edit count and edit severity. A single edit to a pricing line can be the edit that saves the bid.
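Tracking count and severity separately is cheap if each edit is tagged when it is logged. A minimal sketch; the severity labels and weights are assumptions for the example, not a standard taxonomy:

```python
# Illustrative sketch: per-section edit count and severity-weighted total.
# Severity labels and weights are assumed for the example.
from collections import defaultdict

SEVERITY_WEIGHT = {"cosmetic": 1, "substantive": 3, "load-bearing": 10}

def summarize(edits):
    """edits: iterable of (section, severity) pairs.
    Returns {section: (edit_count, weighted_severity)}."""
    counts = defaultdict(int)
    weights = defaultdict(int)
    for section, severity in edits:
        counts[section] += 1
        weights[section] += SEVERITY_WEIGHT[severity]
    return {s: (counts[s], weights[s]) for s in counts}

edits = [
    ("executive summary", "cosmetic"),
    ("executive summary", "substantive"),
    ("pricing narrative", "load-bearing"),
]
summary = summarize(edits)
```

In this toy data, one load-bearing pricing edit outweighs the two executive-summary edits, which is exactly the count-versus-importance distinction the heatmap alone cannot show.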

The four hot sections are predictable enough to invest in preventively. If 60% of reviewer effort is going to four sections, those four sections deserve 60% of drafting effort. Many shops spread drafting effort evenly across all sections and then watch reviewers re-edit the hot sections to shipping standard. A better allocation: draft the hot sections later in the schedule with senior writers, draft the formulaic sections earlier with junior writers, and set the review rubric to match.

Reviewer effort on the hot sections is the best investment in winning. The Shipley posts on color team review make this point repeatedly — reviews are not calendar events, they are the single highest-leverage discipline in the response. Our data agrees. Bids where the gold team spent more than half its time on the executive summary and the first two paragraphs of the technical volume won at noticeably higher rates than bids where the gold team distributed its attention evenly. Not a controlled experiment; a strong pattern in practitioner data. The Bid Lab writeup on right-sizing reviews argues for the same re-allocation in different language.

Two cautions

The heatmap is a description of what reviewers are doing, not a prescription for what writers should be doing. The causality could run either way: reviewers edit the hot sections more because those sections matter more, or the sections attract more edits because the reviewers have been trained to look there. Both are probably true. The practical response is the same either way — the hot sections deserve disproportionate craft attention. But don't mistake the heatmap for an objective measure of quality.

The second caution: sample skew. The data is from teams using a drafting tool that tracks edits. Teams that don’t track edits, teams that edit in email or Word comments, teams that review via phone calls and back-channel — none of that is in the sample. The pattern could look different for shops with different review cultures. Treat the numbers as directional.

What we do with this internally

We use the heatmap to stage reviewer effort. At pink team, the rubric concentrates on structure — does the response match the compliance matrix, does the summary have all five parts, are the win themes threaded. At red team, the rubric concentrates on the hot sections — executive summary, technical-approach opening, outcome-claim specificity. At gold team, the rubric concentrates on the hot sections again, now with a “can any sentence be disproven” standard applied to every claim. The reviewers know the pattern because we ran the exercise of looking at a year of edit data.

You can run the same exercise on your own data, even if you have less of it. Pull the last 20 reviewed drafts, count the edits per section, and see whether the top four sections in your sample are the same four in ours. If they are, you have a staging pattern ready to use. If they aren’t, the difference is interesting — it tells you something about either your response category or your review discipline that the generic benchmark would have hidden.
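The counting step is a few lines once your edits are exported per draft. A minimal sketch, assuming each draft exports as a section-to-edit-count mapping (the input shape and numbers are illustrative; adapt to whatever your tool produces):

```python
# Illustrative sketch of the exercise: total edits per section across
# reviewed drafts, then the top-N sections and their share of all edits.
# Input shape and the sample numbers are assumptions.
from collections import Counter

def top_sections(drafts, n=4):
    """drafts: iterable of {section: edit_count} dicts.
    Returns ([(section, total_edits), ...], share_of_all_edits)."""
    totals = Counter()
    for draft in drafts:
        totals.update(draft)
    top = totals.most_common(n)
    share = sum(count for _, count in top) / sum(totals.values())
    return top, share

drafts = [
    {"executive summary": 12, "technical approach": 9, "pricing": 2},
    {"executive summary": 10, "past performance": 7, "appendices": 1},
]
top, share = top_sections(drafts)
```

Run it over your last 20 reviewed drafts and compare the resulting top four against the four named above; the share number tells you how concentrated your own edit pattern is.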

The heatmap is a tool for concentrating attention where attention actually moves outcomes. The hardest discipline in proposal work is choosing not to spread effort evenly. A year of data says the concentration pays.

Sarah Smith is the house pen for PursuitAgent’s proposal-craft posts. It’s a composite voice, not a single person. Views reflect PursuitAgent’s position; war stories are drawn from real experience in the proposal industry without being tied to a specific employer or engagement.

Sources

  1. Shipley Proposal Guide (7th ed.), Shipley Associates
  2. Shipley — Color team reviews
  3. Bid Lab — Proposal color team reviews explained