You just got pinged about a new solicitation. It's 87 pages. It dropped 30 minutes ago. Someone on your team needs to figure out if it's worth pursuing before the pipeline review tomorrow morning. Sound familiar?
Most BD teams don't have a broken evaluation process. They have no process at all. Someone reads the RFP — maybe thoroughly, maybe skimming — writes up whatever they think is important, and presents it to leadership in whatever format feels right that day. The result: inconsistent triage, missed deal-breakers buried on page 63, and go/no-go decisions based on vibes instead of data.
This guide is the process your team should be following. Not a theory exercise — a practical, step-by-step sequence that an experienced capture manager uses to rip through a federal solicitation and extract exactly what leadership needs to make a call. If you're training a junior analyst, hand them this page.
Before you read a word: know what you're looking at
The single most common waste of time in BD is spending 3 hours reading an RFI like it's a final RFP. Before you invest any effort, spend 30 seconds identifying the document type.
An RFI or Sources Sought is market research — the agency is asking questions, not buying anything. Your response shapes the future RFP, but there's no contract to win today. A 20-minute review is appropriate. A Draft RFP shows you what's coming and gives you time to position, but the requirements will change. Review the evaluation criteria and scope, but don't start writing a proposal. A Final RFP is the real thing. This gets the full treatment.
Matching your effort to the document type is the first efficiency gain. We've seen teams spend 6 hours producing a detailed summary of a Sources Sought notice that resulted in zero actionable decisions. Don't be that team.
Step 1: Pull the 10 data points that kill 80% of opportunities
Most solicitations fail the triage test on basic eligibility. Before you read the scope, the evaluation criteria, or the proposal instructions, extract these 10 fields — because any one of them can produce an instant no-go that saves your team hours:
- Set-aside type — Are you eligible? If it's 8(a) and you're not 8(a), stop.
- NAICS code and size standard — Are you small under this code? Check your SAM.gov registration.
- Contract vehicle — Does this require OASIS+, GSA MAS, SeaPort-NxG, or another vehicle you hold?
- Clearance requirements — Facility clearance level and individual clearances. If you don't have the FCL, this is likely a no-go.
- Agency and contracting office — Do you have a customer relationship? An unknown agency is winnable but harder.
- Estimated value — Is this in your revenue sweet spot? A $200K task order doesn't justify the same pursuit investment as a $20M IDIQ.
- Period of performance — Total duration including options. Longer periods mean more revenue but also more risk.
- Place of performance — Can you staff it? Onsite in San Diego when your team is in Northern Virginia is a staffing problem.
- Contract type — FFP, T&M, cost-plus? Your rate structure and risk tolerance vary by type.
- Response deadline — Do you have enough time to write a competitive proposal?
Here's the problem every BD team runs into: these 10 data points are scattered across 6 different sections of the solicitation. The NAICS is in Section K or the cover page. The clearance is in Section H or the DD-254. The set-aside might be in the synopsis, the cover page, or buried in an amendment. A junior analyst who doesn't know where to look can spend an hour hunting for information that a senior capture manager finds in 5 minutes.
This is the exact extraction problem that RFP Snapshot automates — upload the solicitation documents and get all 10 fields (plus 10 more) in a standardized summary in under 3 minutes. But whether you use a tool or a manual checklist, the principle is the same: extract these fields first, before you read anything else.
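To make the checklist concrete, here's a minimal sketch in Python of what a standardized triage record and an instant no-go screen might look like. The field names, company profile keys, and screening rules are illustrative assumptions, not RFP Snapshot's actual schema — the point is that every solicitation gets the same fields, checked the same way.

```python
from dataclasses import dataclass

@dataclass
class TriageRecord:
    """One record per solicitation; fill every field before anyone reads the SOW."""
    set_aside: str            # e.g. "8(a)", "SB", "Full & Open"
    naics: str
    vehicle_required: str     # "" if no vehicle is required
    fcl_required: str         # e.g. "Secret"; "" if none
    agency: str
    est_value: float          # dollars
    pop_months: int           # total duration including options
    place_of_performance: str
    contract_type: str        # "FFP", "T&M", "CPFF", ...
    days_until_due: int

def instant_no_gos(r: TriageRecord, company: dict) -> list[str]:
    """Return the deal-breakers that kill a pursuit before anyone reads Section C.

    `company` is a hypothetical profile dict: eligible_set_asides, vehicles,
    fcl_levels, and min_response_days.
    """
    reasons = []
    if r.set_aside not in company["eligible_set_asides"]:
        reasons.append(f"Not eligible for {r.set_aside} set-aside")
    if r.vehicle_required and r.vehicle_required not in company["vehicles"]:
        reasons.append(f"Missing required vehicle: {r.vehicle_required}")
    if r.fcl_required and r.fcl_required not in company["fcl_levels"]:
        reasons.append(f"Missing {r.fcl_required} facility clearance")
    if r.days_until_due < company["min_response_days"]:
        reasons.append(f"Only {r.days_until_due} days to respond")
    return reasons
```

An empty list doesn't mean "go" — it means the opportunity survived triage and earned a real review. Any non-empty list is a five-minute no-go instead of a three-hour read.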
Step 2: Read Section M before Section L
This is the habit that separates experienced capture managers from everyone else. Most people read the solicitation front-to-back: cover page, Section B, Section C (the SOW), Section L (proposal instructions), then finally Section M (evaluation criteria). By the time they reach Section M, they've already formed opinions about what matters based on the SOW — and those opinions are often wrong.
Section M tells you what the government is actually scoring: whether technical approach matters more than price, whether past performance is a go/no-go gate, whether the staffing plan is evaluated as a subfactor or not at all. If you read the SOW first, you'll spend mental energy on every requirement equally. If you read Section M first, you'll know which requirements carry evaluation weight and which are just contract administration.
For a deep dive on evaluation methodologies and what evaluators are actually scoring, see our guide: How to Read Section M Like a Winning Capture Manager.
Step 3: Assess key personnel — the silent deal-breaker
We've watched teams spend 40 hours writing a proposal only to realize on the Friday before submission that they can't find a Program Manager with the required PMP, Secret clearance, and 10 years of DHS experience. Key personnel are the most common "hidden no-go" in federal proposals because the requirements are scattered across multiple sections and the full picture only emerges when you cross-reference all of them.
Section L tells you how many resumes to submit. Section M tells you how key personnel are evaluated. Section C defines the actual qualifications (experience, education, certifications). Section H may restrict substitutions or require the person to be a prime contractor employee.
For each key personnel position, build a quick profile: title, experience requirement, education, clearance, certifications, location, and whether they must be a prime employee. Then answer one question: do we have this person, or can we find them before the proposal is due? If the answer is "no" for any required position, that's a critical factor in your go/no-go discussion — not something to discover the week of submission.
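The cross-referencing step above lends itself to the same treatment. Here's a hedged sketch of a key personnel slot and a gap check against a candidate bench; the slot fields and candidate dict keys are assumptions for illustration, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class KeyPersonnelSlot:
    """One required position, assembled from Sections L, M, C, and H."""
    title: str
    min_years: int
    clearance: str              # "" if no clearance is required
    certifications: list[str]   # e.g. ["PMP"]
    prime_employee_only: bool   # Section H may require a prime employee

def staffing_gap(slot: KeyPersonnelSlot, candidates: list[dict]) -> bool:
    """True if nobody on the bench satisfies every requirement for the slot."""
    def qualifies(c: dict) -> bool:
        return (
            c["years"] >= slot.min_years
            and (not slot.clearance or slot.clearance in c["clearances"])
            and all(cert in c["certs"] for cert in slot.certifications)
            and (not slot.prime_employee_only or c["is_prime_employee"])
        )
    return not any(qualifies(c) for c in candidates)
```

Run this at triage, not the week of submission: a `True` for any required slot is exactly the critical go/no-go factor described above.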
Step 4: Scope check — alignment, not comprehension
At the triage stage, you're not trying to understand every detail of the Performance Work Statement. You're trying to answer one question: does the scope of this work match what our company does?
Scan the PWS/SOW headings and functional area descriptions. Most are organized by service area — IT Operations, Cybersecurity, Help Desk, Network Engineering, Facilities Maintenance, Administrative Support. In 2-3 minutes you can determine whether the work profile matches your core capabilities, where you'd need to team or sub, and whether there are any functional areas you simply can't deliver.
The detailed SOW analysis happens after the go/no-go decision, during proposal development. At triage, you're just checking fit.
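The 2-3 minute scan reduces to a simple bucketing exercise, which a short sketch makes explicit. The functional area names are examples from the text; how you populate your core and partner capability sets is up to you.

```python
def scope_fit(pws_areas: list[str], core: set[str], partner: set[str]) -> dict:
    """Bucket each PWS functional area: deliver in-house, team/sub, or gap."""
    fit = {"core": [], "team": [], "gap": []}
    for area in pws_areas:
        if area in core:
            fit["core"].append(area)       # we do this ourselves
        elif area in partner:
            fit["team"].append(area)       # a teammate or sub covers it
        else:
            fit["gap"].append(area)        # nobody on the team delivers this
    return fit
```

A non-empty `gap` bucket doesn't automatically mean no-go, but it's a teaming conversation that needs to start before the go/no-go meeting, not after.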
Step 5: Past performance gut-check
Every federal proposal has a past performance component. The solicitation specifies how many references you need, what makes them "relevant" (dollar value, recency, scope similarity), and whether subcontractor or affiliate experience counts.
Your triage assessment is simple: do you have the right number of references that meet the relevancy thresholds? If the RFP requires three references of $5M+ annual value in IT services within the last five years, and you have two strong ones and a third that's borderline, that's a "conditional go" — not a "definitely go." If you have zero that meet the threshold, that's a serious competitive weakness worth acknowledging before you invest 200 hours in a proposal.
Step 6: Timeline sanity check
Work backward from the due date. If the proposal is due in 14 days and you haven't started teaming conversations, don't have key personnel identified, and need to write a 50-page technical volume from scratch — that's not a timeline problem, that's a no-go dressed up as a time crunch. Proposal quality collapses when teams try to compress 6 weeks of work into 2 weeks. The result is a compliant but mediocre submission that scores "Acceptable" when you needed "Outstanding."
A realistic timeline assessment considers: writing time for each volume, internal review cycles (at minimum a Pink Team and a Red Team), graphics and production, pricing development, and the inevitable amendments that will change requirements mid-effort.
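Working backward from the due date is a calculation worth writing down. This sketch assumes a simple serial schedule (writing, then Pink Team, then Red Team, then production) with made-up default durations; real schedules overlap phases and absorb amendments, so treat the output as a floor, not a plan.

```python
from datetime import date, timedelta

def back_plan(due: date, writing_days: int, today: date,
              pink_days: int = 3, red_days: int = 3,
              production_days: int = 2) -> dict:
    """Work backward from the due date to the latest safe start for each phase."""
    production_start = due - timedelta(days=production_days)
    red_start = production_start - timedelta(days=red_days)
    pink_start = red_start - timedelta(days=pink_days)
    writing_start = pink_start - timedelta(days=writing_days)
    return {
        "writing_start": writing_start,
        "pink_team_start": pink_start,
        "red_team_start": red_start,
        "production_start": production_start,
        # negative slack means the schedule is already blown
        "days_of_slack": (writing_start - today).days,
    }
```

A `days_of_slack` of 1 means writing must start tomorrow; anything negative is the "no-go dressed up as a time crunch" described above.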
Step 7: Present the data, make the call
Everything above should produce a one-page summary — a structured document that gives your leadership team every data point they need for a go/no-go decision without requiring them to read the RFP themselves. When the data is standardized, the decision meeting takes 5 minutes per opportunity instead of 30.
For a detailed framework on structuring the actual go/no-go decision, see: The Bid/No-Bid Mistakes That Are Quietly Killing Your Win Rate.
The real bottleneck isn't judgment — it's extraction
After working with dozens of GovCon BD teams, we've noticed something consistent: the quality of go/no-go decisions isn't the problem. Leadership teams have good judgment. They know their company, they know their customers, they know what they can win. The problem is that the data they need to exercise that judgment arrives too slowly, too inconsistently, and too late.
A junior analyst spending 3 hours per solicitation means your leadership is making decisions based on whoever happened to read this one and whatever they thought was important. A standardized summary that captures the same 20+ fields from every solicitation, in the same format, in under 3 minutes means your leadership is making decisions based on complete, comparable data across your entire pipeline.
That's what we built RFP Snapshot to do. But the process works regardless of the tool — the key is consistency. If you take nothing else from this guide, take this: build a checklist, train your team to use it the same way every time, and never let another go/no-go decision happen without standardized data on the table.