A capture manager we know once lost a $12M contract because his team wrote an Outstanding technical approach — for an LPTA evaluation. They spent 400 hours crafting the most innovative, detailed, differentiated proposal in the competitive range. None of it mattered. The government awarded to the lowest-priced technically acceptable offeror, exactly as Section M said they would. The team just hadn't read Section M carefully enough to change their strategy.
This happens more often than anyone in GovCon wants to admit. Teams spend weeks writing proposals optimized for the wrong evaluation methodology. They emphasize innovation when the government is scoring on confidence. They write for "Outstanding" when the evaluation only distinguishes between "Acceptable" and "Unacceptable." They allocate 30 pages to management approach when it's the lowest-weighted factor.
Section M is the cheat sheet the government gives you for free. It tells you exactly how they're going to score your proposal. This guide explains how to read it, what the different evaluation methodologies actually mean for your proposal strategy, and how to use this intelligence to make smarter pursuit decisions before you write a single word.
Where the full evaluation picture lives (it's not just Section M)
The evaluation criteria are described in Section M, but the full picture requires cross-referencing three sections:
Section L tells you what to submit and how to structure each volume. The order and emphasis of Section L often mirror the evaluation factors: if Section L asks for 30 pages on Technical Approach and 5 pages on Management, that's a signal about relative importance even before you read Section M.
Section M tells you how proposals will be scored: the methodology, the factors and subfactors, their relative importance, and the rating scale.
Section C (the PWS/SOW) tells you what the work actually involves. Your technical approach is evaluated against these requirements using the criteria in Section M.
A common and expensive mistake is reading these in isolation. Section L says "describe your approach." Section M says "technical approach will be evaluated for innovation and risk mitigation." Section C describes a complex, multi-site IT modernization. Only by reading all three together do you understand that the evaluators want an innovative approach to a specific, complex problem — not a generic description of your company's capabilities.
The three evaluation methodologies (and why getting this wrong costs you everything)
Best value trade-off: when quality can beat price
This is the most common methodology for complex acquisitions. The government evaluates both technical merit and price, and may award to a higher-priced offeror if the technical superiority justifies the premium. The solicitation will state the relationship: "technical factors are significantly more important than price" or "technical and price are approximately equal."
Your strategy: invest heavily in demonstrating technical excellence, innovation, and deep understanding of the customer's mission. A compelling technical story that scores "Outstanding" can overcome a price that's 10-15% higher than the competition. But don't ignore price entirely — the source selection authority has to justify the premium, and a dramatic price outlier makes that justification difficult.
Lowest price technically acceptable (LPTA): when good enough wins
Under LPTA, every proposal that meets the minimum technical requirements is rated "Acceptable," and the contract goes to the lowest-priced acceptable offeror. There is no technical trade-off. An Outstanding technical approach provides zero competitive advantage over an Acceptable one.
This is where the mistake from our opening story becomes clear. Every page your team writes beyond what's needed to pass the acceptability bar is wasted effort. Every innovative approach, every discriminator, every proof of excellence: all of it is irrelevant. The entire competition is decided by your rate card.
Your strategy: meet the minimum technical requirements clearly and concisely, then win on price. Allocate your best proposal resources to the pricing volume instead of the technical volume. If your rates aren't competitive for LPTA evaluations, consider whether this opportunity belongs in your pipeline at all.
Highest technically rated with fair and reasonable price: when excellence is everything
This methodology awards the contract to the offeror with the highest-rated technical proposal, provided that offeror's price is fair and reasonable. The government doesn't weigh technical merit against price; it ranks proposals by technical merit and confirms the winner's price isn't unreasonable.
Your strategy: go all-in on technical quality. This is where innovation, specific examples, named personnel, and a detailed understanding of the customer's mission create maximum competitive separation. Price just needs to be defensible, not lowest.
What evaluators are actually scoring (the difference between "Good" and "Outstanding")
Most solicitations use adjectival ratings: Outstanding, Good, Acceptable, Marginal, Unacceptable. Understanding what separates each level is the difference between a proposal that wins and one that scores in the competitive middle.
"Acceptable" means you met the requirements as stated. You demonstrated you can do the work. There are no deficiencies. This is the floor — every competitive proposal achieves this.
"Good" means you demonstrated strengths beyond the minimum. You showed a better-than-average understanding and approach. Maybe you addressed a risk the solicitation didn't explicitly mention, or proposed a methodology that goes slightly beyond what was asked for.
"Outstanding" means you significantly exceeded requirements. You demonstrated exceptional understanding, proposed innovative solutions, provided specific evidence of superior capability, and gave the evaluator high confidence that you'll not only perform the work but exceed expectations. Outstanding requires concrete, specific evidence — not superlatives.
The gap between "Good" and "Outstanding" is where contracts are won and lost. "Good" says "we can do this." "Outstanding" says "we've done this before, here's exactly how, and here's what we'll do differently for you that nobody else will think of."
Some agencies use confidence ratings instead: High Confidence, Some Confidence, Low Confidence. Confidence is driven by specificity and proof. Evaluators gain confidence from named personnel with verified qualifications, specific past performance on similar work with measurable outcomes, detailed methodologies with clear rationale (not just "we will use industry best practices"), and concrete risk mitigation plans tied to the actual risks of this specific contract.
How evaluation criteria should drive your bid/no-bid decision
Most teams treat evaluation criteria as a proposal-writing input. Smart teams treat them as a pursuit-decision input. Here's the difference:
If past performance is the highest-weighted factor and you don't have relevant references meeting the dollar and recency thresholds, your Pwin is low. It doesn't matter how good your technical approach is. The evaluation criteria just told you that past performance will carry more weight than technical innovation — and you're weak on the dominant factor.
If the evaluation is LPTA and your fully burdened rates are above market for this NAICS, you're going to lose on price. The evaluation criteria just told you that technical excellence has zero marginal value — and you can't win the factor that matters.
Conversely, if the evaluation heavily weights technical approach and you have a proven, differentiated methodology with measurable results on similar work, the criteria are telling you this opportunity plays to your greatest strength. That's a strong "go" signal.
Reading Section M before deciding to bid isn't optional — it's the single most important triage step after checking basic eligibility. For a complete framework on structuring your bid/no-bid decision around evaluation criteria and other factors, see: The Bid/No-Bid Mistakes That Are Quietly Killing Your Win Rate.
The extraction problem: why most teams misread Section M
Evaluation criteria sound simple when described in a guide like this. In practice, they're buried in dense solicitation language that rewards careful reading and punishes skimming. A typical Section M might be 3-5 pages of interleaved factors, subfactors, relative importance statements, rating definitions, and cross-references to Section L. A busy capture manager scanning it in 10 minutes can easily miss that "Factor 2 is slightly less important than Factor 1" or that past performance is evaluated on a pass/fail basis rather than an adjectival scale.
These details matter. A team that thinks past performance is adjectivally rated will spend hours crafting compelling narratives. A team that knows it's pass/fail will confirm they meet the threshold and move on — investing those hours in the technical approach instead.
RFP Snapshot extracts evaluation criteria automatically from any federal solicitation — methodology, factor ranking, relative importance, and key evaluation notes — and presents them in a structured format alongside all other triage data. The goal isn't to replace reading Section M (you should always read it yourself for the opportunities you pursue). The goal is to give you the evaluation picture in 3 minutes so you can make a smarter pursuit decision before investing the time in a deep read.