Five years of APMP salary data, in one chart
What the APMP salary surveys tell us — and don't — about proposal-team compensation over the last five years. A teardown of the public dataset, the methodology, and the gaps a careful reader has to keep in mind.
Every year or two, the Association of Proposal Management Professionals (APMP) publishes a salary survey of its membership. The survey is the closest thing the proposal industry has to a longitudinal compensation dataset, and it is referenced — sometimes accurately, sometimes not — across most of the practitioner-facing content in this category.
This post is a teardown of the dataset as a research artifact: what it actually contains, what it doesn’t, what trajectory it documents over the last five years, and what a careful reader should and shouldn’t conclude from it. We are deliberately not publishing specific dollar figures from the surveys in this post. The reason is that APMP’s data carries known caveats — self-selected respondents, varying response rates year-over-year, definitional shifts in role titles, geographic and currency normalization choices — that make point-in-time numbers more misleading than illuminating when stripped of context. The trajectory is the read. The exact numbers belong in the source surveys, where a reader can see the methodology page next to the chart.
What the APMP survey is, and what it isn’t
APMP’s salary survey is a self-reported compensation survey of its global membership. Respondents complete a structured questionnaire covering role title, years of experience, education, certifications (APMP’s own credential program produces several tiers of professional certification), geography, employer type (commercial vs. government contractor vs. consultancy vs. agency), team size, and total compensation including base, bonus, and equity where relevant.
Three things to understand about the dataset before reading any chart from it.
It is self-selected. APMP membership is voluntary. Respondents to the salary survey are a subset of the membership who chose to complete it. The respondent pool skews toward people with stable employment in roles they identify with — the dataset undercounts contract proposal writers, freelance bid consultants, and people in adjacent roles (capture management, technical writing) who don’t identify as proposal professionals.
Role titles are not consistent across employers. “Proposal Manager” at a 5,000-person federal contractor is often a senior IC role with a portfolio of pursuits. “Proposal Manager” at a 50-person SaaS company is often the entire proposal function — the only person doing this work, at any level. Both report into the survey under the same title. Year-over-year comparisons of “Proposal Manager” compensation blend these populations, and the blend itself shifts as APMP’s membership composition shifts.
Geographic normalization is approximate. APMP’s membership is global; the surveys publish currency-normalized aggregates. Currency normalization is a methodological choice with known artifacts — the numbers will shift as exchange rates shift, which can produce apparent compensation movements that are exchange-rate noise rather than labor-market signal.
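To make the exchange-rate artifact concrete, here is a minimal sketch with entirely hypothetical salaries and rates (not figures from the surveys): every respondent's local-currency pay is identical in both years, yet the USD-normalized median moves because the conversion rates moved.

```python
# Illustrative only: hypothetical salaries and exchange rates, not APMP data.
# Shows how currency normalization alone can move a "global median"
# even when no respondent's local-currency pay changes.

def usd_median(salaries_local, fx_to_usd):
    """Convert each (amount, currency) salary to USD and take the median."""
    converted = sorted(amt * fx_to_usd[cur] for amt, cur in salaries_local)
    n = len(converted)
    mid = n // 2
    return converted[mid] if n % 2 else (converted[mid - 1] + converted[mid]) / 2

# Same local-currency salaries in both survey years (hypothetical figures)
salaries = [(80_000, "USD"), (70_000, "GBP"), (75_000, "EUR"), (95_000, "USD")]

fx_year1 = {"USD": 1.00, "GBP": 1.35, "EUR": 1.18}  # hypothetical rates
fx_year2 = {"USD": 1.00, "GBP": 1.20, "EUR": 1.05}  # pound and euro weaken

m1 = usd_median(salaries, fx_year1)
m2 = usd_median(salaries, fx_year2)
print(m1, m2)  # the normalized median falls even though nobody's pay changed
```

The same mechanism runs in reverse when the dollar weakens, producing apparent compensation growth that is pure exchange-rate signal.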
These caveats do not invalidate the dataset. They reframe what the dataset is good for. It is good for trajectory. It is bad for absolute-level comparisons across years.
The trajectory, qualitatively
Across the last five years of published surveys, the qualitative pattern has been consistent in three respects.
Median proposal-role compensation has trended modestly upward in nominal terms. This is what you would expect over a five-year window in a labor market that experienced material wage growth in the 2021-2024 period. The directional movement is consistent across most role tiers. Whether it represents real (inflation-adjusted) growth is sensitive to the inflation deflator a reader chooses to apply, and the surveys do not publish inflation-adjusted series. Anyone publishing “proposal manager comp grew X% in real terms over five years” is doing the deflator math themselves and should be transparent about it.
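The deflator math a reader has to do themselves is simple but worth writing out. A sketch with hypothetical numbers (none of these figures come from the surveys): a nominal gain can become a real decline once a price index is applied, and the answer is sensitive to which index you choose.

```python
# Illustrative deflator math with hypothetical numbers — not APMP figures.
# Converts a nominal compensation change over a window into real terms
# using whatever price index the reader chooses to apply.

def real_growth(nominal_start, nominal_end, cpi_start, cpi_end):
    """Real (inflation-adjusted) growth over the window, as a fraction."""
    nominal_ratio = nominal_end / nominal_start
    deflator = cpi_end / cpi_start          # cumulative inflation over window
    return nominal_ratio / deflator - 1

# Hypothetical: nominal comp up 18% while the price index rose 21%
growth = real_growth(100_000, 118_000, 100.0, 121.0)
print(f"{growth:+.1%}")  # the nominal gain becomes a small real decline
```

Anyone publishing a real-terms claim is making exactly this calculation, and transparency means naming the index and the endpoints used.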
Senior roles widened their gap from junior roles. The compensation premium for senior proposal management, capture management, and proposal-operations leadership has expanded. This is consistent with the broader pattern in knowledge-work compensation over the period — the senior-vs.-junior gap widened across most professional categories. It is also consistent with what Quilt has written about RFP capacity pressure: senior people who can run the entire response process are scarcer relative to demand than junior writers, and the compensation gap reflects the scarcity.
Compensation growth is unevenly distributed across employer types. Federal-contractor proposal roles tend to track GS-equivalent or contractor-rate ceilings; commercial-tech proposal roles tend to track broader software-industry compensation trends. Over the five-year window, commercial-tech compensation grew faster off a higher base, while federal-contractor compensation grew more slowly. The result is a widening gap between the two segments that the survey aggregate masks if you read only the headline median.
What the surveys say about certification
APMP runs a tiered certification program (Foundation, Practitioner, Professional). The salary surveys consistently show a positive correlation between certification level and compensation. The correlation is real; the causal interpretation is contested. APMP’s own framing emphasizes the certification-as-driver story. A more cautious read is that certification is correlated with seniority, employer type (large federal contractors are more likely to require certification), and tenure — and those factors independently predict higher compensation.
A reader extracting career advice from the survey should understand the difference. Pursuing certification because it predicts compensation is reasonable. Pursuing certification because the survey shows certified-vs.-uncertified comp deltas of X dollars and assuming certification causes X dollars of additional comp is a misread of correlation as causation.
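The confounding story is easy to demonstrate with a toy simulation. All parameters below are invented for illustration: seniority drives both certification and compensation, certification itself contributes nothing, and the certified group still shows a large compensation premium.

```python
# Toy confounding simulation (hypothetical parameters, not APMP data):
# seniority drives BOTH certification and compensation; certification
# adds nothing causally, yet certified respondents still out-earn others.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    seniority = random.uniform(0, 20)               # years of experience
    certified = random.random() < seniority / 25    # senior folks certify more
    comp = 60_000 + 3_000 * seniority + random.gauss(0, 8_000)  # no cert term
    rows.append((certified, comp))

cert = [c for ok, c in rows if ok]
uncert = [c for ok, c in rows if not ok]
delta = sum(cert) / len(cert) - sum(uncert) / len(uncert)
print(f"certified premium: ~${delta:,.0f}")  # positive despite zero causal effect
```

The published survey tables are exactly this kind of two-group comparison; without controls for seniority, employer type, and tenure, the delta cannot be read as the value of the credential.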
What the surveys say about team size
The surveys consistently report that compensation scales with team size — proposal leaders managing larger teams earn more. This is a tautology to some extent (more responsibility produces more compensation in most professional categories). The interesting read is the ratio: how much does a team-size step (say, from a 5-person team to a 15-person team) move compensation, vs. how much does an experience step (5 years to 10 years) move it?
The published cross-tabs are not deep enough to answer this cleanly. The surveys publish marginal cross-tabs (compensation by years of experience, compensation by team size) but not the joint distribution. A reader who wants to know whether a given combination of seniority and team size predicts above-median compensation cannot directly answer that from the published tables.
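Why marginals aren't enough is worth making concrete. A small sketch with hypothetical respondent counts: two very different joint distributions of experience tier and team-size tier produce identical published marginals, so the combination question is unanswerable from the marginal tables alone.

```python
# Two hypothetical joint distributions (experience tier × team-size tier)
# with IDENTICAL marginals — published marginal cross-tabs cannot
# distinguish them, so questions about combinations go unanswered.

# rows: junior/senior; cols: small-team/large-team (respondent counts)
joint_a = [[40, 10],
           [10, 40]]   # seniority and large teams strongly overlap
joint_b = [[25, 25],
           [25, 25]]   # no relationship at all

def marginals(joint):
    row_totals = [sum(r) for r in joint]
    col_totals = [sum(c) for c in zip(*joint)]
    return row_totals, col_totals

print(marginals(joint_a))  # row and column totals for distribution A
print(marginals(joint_b))  # identical totals, despite a different joint
```

Publishing even a coarse joint table (experience × team size, with cell counts) would resolve this without adding survey questions.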
What the dataset does not tell you
This is where the post becomes more useful than the headline numbers, because the gaps in the data are large.
Win-rate-adjusted compensation. A proposal team that wins 40% of its bids and a proposal team that wins 15% of its bids are running materially different operations. The surveys do not link compensation to win-rate or pipeline outcomes. Whether high-comp roles correlate with high-win-rate teams is unknown from the public data.
Hours worked. Proposal work is famously variable in hours-per-week — the deadline-driven nature of bids produces compressed work cycles that distort an annualized hourly view. The surveys ask about salary; they do not normalize for hours. A “well-compensated” senior role on the survey may be working 60-hour weeks during three or four major pursuits a year and 25-hour weeks otherwise, or it may be working steady 40s on a high-velocity task-order pipeline. These are very different jobs at the same headline number.
Burnout and turnover. Lohfeld Consulting’s writing on proposal-process pain describes a workforce under sustained deadline pressure. The salary surveys do not capture turnover rates, voluntary attrition, or burnout indicators. A role that pays 90th-percentile and has 18-month median tenure is a different career than a role that pays 75th-percentile and has 6-year median tenure.
Compensation for adjacent roles. Capture managers, bid consultants, technical proposal writers, pricing analysts, color-team review leads — all are adjacent to or part of proposal-function work, and many of them are not captured in the “proposal management” role bucket the surveys foreground. A complete view of proposal-function compensation across an organization requires layering multiple datasets, which the APMP surveys alone don’t provide.
What a careful reader does with this dataset
Three things, in order.
Use the trajectory, not the levels. Five years of trend data is more reliable than any single year’s level. If you are negotiating compensation, the trajectory tells you whether your role is in a labor market that has been moving up; it doesn’t tell you what you should ask for in absolute terms.
Cross-reference with other datasets. Bureau of Labor Statistics data on writers and editors, industry-specific salary reports from federal-contractor analysts, and broader software-industry compensation surveys (where applicable) provide triangulation that APMP alone cannot. A compensation read built from a single source is fragile.
Talk to people in the role you’re targeting. This is the part that doesn’t go in a research post and that everyone underrates. Aggregated survey data is necessary but not sufficient. Five conversations with people two steps ahead of you in the same role at the same employer type will calibrate a number from any survey faster than reading another chart.
Why we wanted to publish this and didn’t
A separate honest note. We considered publishing a five-year chart in this post — a single image with year-on-year median compensation for three role tiers, sourced from the APMP surveys, with our methodology written out underneath.
We didn’t, for two reasons.
The data points are inside APMP’s published surveys, which are member-accessible. Republishing the specific numbers without permission and without the methodology context felt off. The survey reports themselves are the right place to read the numbers, alongside the response-rate disclosures and the methodology pages.
And the chart we would have produced would have suggested a tighter time series than the data actually supports. Different surveys in the five-year window had different respondent populations, slightly different role-bucket definitions, and shifting geographic mixes. Plotting them as a smooth line would have created an artifact of presentation rather than a finding.
The honest version of this post is the one you’ve just read: here is what the dataset is, here is what it tells you in shape, here is what it doesn’t tell you, here is where to look. If APMP runs a future survey with deeper methodological consistency and explicit cross-tabs, we’ll revisit. For now, “five years of APMP salary data, in one chart” is a chart that doesn’t exist in the way the title implied — and the more useful answer was to walk through why.
What we would build, if APMP asked
Two methodological additions that would meaningfully improve the dataset:
A panel structure — re-surveying a stable cohort of respondents year-over-year — would convert the cross-sectional survey into a longitudinal one and dramatically improve trajectory analysis. The current approach blends new entrants and exiting respondents in ways that obscure career-stage transitions.
A linked-outcomes section — optional self-reported pursuits closed, win rates, and team performance metrics — would let researchers connect compensation to outcomes. A version of this exists in some peer professional surveys (project-management, consulting) and would meaningfully strengthen what APMP’s data can answer.
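The panel-structure argument can be illustrated with hypothetical numbers: in the sketch below, every continuing respondent gets a raise, yet the cross-sectional median falls because a wave of junior entrants joins the pool. A fixed-cohort (panel) read would show the true upward trajectory; the blended cross-section shows a decline.

```python
# Hypothetical illustration (not survey figures): composition shift makes
# a cross-sectional median fall even when every individual's pay rises.

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    return xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2

year1 = [70_000, 85_000, 95_000, 110_000]           # hypothetical cohort
year2_cohort = [round(x * 1.04) for x in year1]     # everyone up 4%
entrants = [52_000, 55_000, 58_000, 60_000]         # new junior respondents
year2_crosssection = year2_cohort + entrants

print(median(year1))               # baseline median
print(median(year2_cohort))        # panel read: median rises 4%
print(median(year2_crosssection))  # blended read: median falls
```

This is the core of the case for re-surveying a stable cohort: the panel isolates within-person trajectory from composition change.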
Neither change is small. Both would meaningfully improve a dataset the industry already relies on. APMP has done good work over many years assembling what exists. The frontier is where it could go next.