Field notes

Discriminators: the word your evaluator was trained on

APMP calls them discriminators. Most teams don't write them. Three real examples from awarded proposals — what they did, why they worked.

Sarah Smith · 6 min read · Craft

A discriminator is a capability you have that your competitor either does not have or cannot honestly claim. APMP defines it precisely. Most proposal teams do not write them. They write benefits — “improved efficiency,” “reduced risk” — and they call those benefits discriminators because nobody on the proposal team has been told the difference.

Evaluators were trained on the difference. They scan for it.

This post is three examples of discriminators from real awarded proposals — anonymized to remove the customer’s name but otherwise intact — and what they did that mere benefits do not.

Why benefits are not discriminators

A benefit is a thing your offer does. “Reduces incident response time.” “Cuts onboarding from 90 days to 30.” “Provides 24/7 support.”

A discriminator is a thing your offer does that the alternative does not. The framing has to be comparative. If a competitor can read your sentence and say “yes, we do that too,” it isn’t a discriminator. It’s a feature.

The PropLibrary swap test is the cleanest version of this rule: if you can swap your company name with another vendor’s and the sentence is still true, the sentence isn’t a discriminator. It’s filler.

The reason this matters: evaluators score against a rubric, and the rubric is usually written by someone who has read enough proposals to know what unique-to-the-bidder language looks like. A page of benefits reads as identical to the previous bidder’s page of benefits. A discriminator stops the evaluator’s scan.

Example 1 — the audit-trail discriminator

The setting. A state government RFP for case-management software. The RFP asked, in section 4.7.2, for the bidder’s approach to “data integrity and audit logging.”

The weak version (rejected at red team). The bidder’s first draft said:

Our platform provides comprehensive audit logging across all user actions, ensuring compliance with state and federal data-integrity standards. Robust logging is a cornerstone of our security architecture.

This is not a discriminator. Every competitor has audit logs. “Comprehensive,” “robust,” and “cornerstone” are filler words. The sentence describes a checkbox, not an advantage.

The discriminator version (shipped, won).

Our audit log captures every read of every record, not just every write. Caseworker A opening a sealed juvenile record at 11:47 p.m. on a Saturday is logged with the same fidelity as caseworker A editing that record. The current state vendor logs only writes; the migration path we describe in section 4.7.3 surfaces three years of read-only access patterns the current system cannot reproduce.

What this version did:

  • It named a specific behavior the competitor did not have (read logging).
  • It tied the behavior to a real evaluator concern (juvenile records access).
  • It referenced the incumbent vendor’s gap by name via the migration section, without naming the vendor directly.
  • It promised something concrete: three years of access patterns surfaced post-migration.

The evaluator’s scoring notes — released in debrief — flagged this paragraph specifically as the moment the bid pulled ahead.

Example 2 — the response-time discriminator

The setting. A managed-services RFP for a Fortune 500 customer. The customer’s RFP specified a four-hour SLA on critical incidents.

The weak version.

We exceed your SLA requirement with industry-leading response times. Our 24/7 NOC, staffed with senior engineers, ensures rapid response to critical incidents.

Again, not a discriminator. Every managed-services bidder will claim 24/7 staffing. “Industry-leading” is meaningless without a comparator.

The discriminator version.

We commit to a two-hour response time on critical incidents — half your specified SLA. We can commit to two hours because our shift structure includes a senior engineer on the bridge before the first ticket page, not after. The financial penalty for missing the two-hour mark is included in section 7 of the MSA: 2% of monthly fees per incident, capped at 25% per month. The four-hour-SLA bidders structurally cannot match this; their cost model assumes a 90-minute pager-to-bridge interval.

What this version did:

  • It named a number more aggressive than the requirement (two hours vs. four).
  • It explained why the number was achievable (shift structure).
  • It put money on the line (financial penalty written into the contract).
  • It explained why competitors cannot match (their cost model — a structural claim, not a personal attack).

A discriminator that names a structural reason competitors can’t match is the strongest form. It moves the argument from “we’re better” to “they can’t.”

Example 3 — the regulatory-history discriminator

The setting. A financial-services RFP for a data-processing platform. The customer is a regulated entity in a jurisdiction with a specific compliance regime.

The weak version.

We have extensive experience supporting regulated financial institutions and a deep understanding of the compliance landscape.

Could be any vendor. Adjectives doing the work of nouns.

The discriminator version.

Our platform has cleared 14 regulatory examinations across the last six years — six in the EU, five in the UK, three in Singapore. We have never been the subject of a remediation order. Our compliance team includes two former examiners from the FCA. The full examination history, with regulator names and outcomes, is in appendix C.

What this version did:

  • Specific numbers (14, 6, 5, 3, 6 years).
  • Specific regulators named.
  • A negative claim that’s hard to fake (never the subject of a remediation order).
  • A team-composition claim with a verifiable shape (two former FCA examiners).
  • An evidence pointer (appendix C with the full history).

This is the form an evaluator can fact-check during the review. That’s the standard.

How to test a discriminator

Three tests, in order:

  1. The swap test. Replace the bidder name with a competitor’s. Does the sentence still read true? If yes, it isn’t a discriminator.
  2. The “and?” test. Read the sentence to a skeptical colleague who responds “and?” to every claim. If the answer to “and?” is more adjectives, the sentence isn’t done. The answer needs to be a comparison: “and the alternative is X.”
  3. The proof test. Can the claim be checked? If a buyer’s procurement team called your reference, would the reference confirm the specific number, the specific behavior, the specific structural claim?

If a sentence passes all three, it is a discriminator. If it fails any one of them, it’s a benefit dressed in stronger words.
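The swap test is mechanical enough to sketch in code. The judgment of whether the swapped sentence "still reads true" stays with a human reviewer; the sketch below only automates the substitution. The names `Acme` and `RivalCorp` are hypothetical, not from any of the proposals above.

```python
def swap_test(sentence: str, our_name: str, competitor: str) -> str:
    """Return the sentence with our company name swapped for a competitor's.

    If the swapped sentence is still plausible, the original claim is
    filler, not a discriminator. The plausibility call is the reader's;
    this function only performs the swap.
    """
    return sentence.replace(our_name, competitor)

# The weak audit-logging claim from example 1, with a hypothetical bidder name:
claim = "Acme provides comprehensive audit logging across all user actions."
print(swap_test(claim, "Acme", "RivalCorp"))
# The swapped sentence is just as plausible, so the claim fails the test.
```

Running the same swap on the shipped version of example 1 fails in the other direction: "RivalCorp's audit log captures every read of every record" is a claim the incumbent could not honestly make, which is exactly the point.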

What this changes about how we draft

In the eight-stage pipeline post, I described capture as the place where win themes are set. Discriminators are the specific, evaluator-grade form of a win theme — the language that makes the win theme do work in the document. A capture plan that lists three win themes without listing the discriminators behind them is half a capture plan.

In practice, every win theme in our internal capture template now has two slots beneath it: “structural reason competitor cannot match” and “evidence pointer.” If either slot is empty, the win theme isn’t ready to ship into the draft.
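The two-slot rule is simple enough to express as a readiness check. A minimal sketch, assuming a flat template; the field names (`structural_reason`, `evidence_pointer`) mirror the slot labels above but the structure itself is hypothetical, not our actual template.

```python
from dataclasses import dataclass

@dataclass
class WinTheme:
    theme: str
    structural_reason: str = ""  # why the competitor cannot match
    evidence_pointer: str = ""   # where the proof lives (appendix, contract section)

    def ready_for_draft(self) -> bool:
        # A win theme ships into the draft only when both slots are filled.
        return bool(self.structural_reason.strip()) and bool(self.evidence_pointer.strip())

themes = [
    WinTheme(
        "Faster incident response",
        structural_reason="Shift model puts a senior engineer on the bridge pre-page",
        evidence_pointer="MSA section 7 penalty clause",
    ),
    WinTheme("Deep compliance experience"),  # both slots empty: not ready
]
for t in themes:
    print(f"{t.theme}: {'ready' if t.ready_for_draft() else 'not ready'}")
```

The check is deliberately binary: a half-filled slot counts as empty, which is the same standard the red team applies.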

The takeaway

The word “discriminator” is APMP jargon. The thing the word names is the difference between a proposal that wins and a proposal that places. Your evaluator was trained on the distinction; your draft should be too.

Sources

  1. APMP Body of Knowledge — Discriminators and ghosts
  2. PropLibrary — Proposal win themes: the good, the bad, and six examples
  3. Shipley — Color team reviews
  4. PursuitAgent — Win themes are verbs, not adjectives