Go/No-Go Scoring
Comprehensive decision scoring methodology with four outcomes, configurable weights, decision ring visualization, and action recommendations
The Go/No-Go Scoring system synthesizes multiple decision factors into a single composite score with clear action recommendations. This structured approach replaces subjective decision-making with data-driven methodology while preserving room for strategic judgment.
Overview
Every opportunity receives a Decision Score from 0-100, calculated by weighting four key factors:
- Win Probability (40% default weight): AI-calculated likelihood of success
- Strategic Fit (30% default weight): Alignment with company priorities
- Capability Match (20% default weight): Strength of requirement matches
- Competitive Position (10% default weight): Market standing and differentiators
The composite score maps to one of four decision outcomes, each with specific pursuit recommendations and resource allocation guidance.
The Four Decision Outcomes
1. Strong Go (80-100 points)
Recommendation: Full pursuit with top resources
Characteristics:
- High win probability (typically 65%+)
- Strong strategic alignment
- Excellent capability match
- Favorable competitive position
Typical Action Plan:
- Assign A-team personnel (best proposal writers, technical leads)
- Allocate full budget for proposal development
- Engage executive sponsors for customer relationships
- Invest in competitive intelligence and win strategy
- Consider teaming only for strategic reasons (not capability gaps)
- Plan for robust technical approach and innovative solutions
- Budget for customer engagement (pre-proposal meetings, demos, site visits)
Resource Commitment: 100% of normal proposal budget and timeline
Success Metrics: 60-70% win rate expected on Strong Go opportunities
Success
Strong Go opportunities are your bread and butter. These should represent 20-30% of your pipeline and drive the majority of your wins. If you're not achieving 60%+ win rate on Strong Go decisions, recalibrate your scoring model.
Example Strong Go Opportunity:
```text
Opportunity: Cloud Infrastructure Modernization ($2.5M, 3-year)
Win Probability: 72% (past performance with customer, strong technology match)
Strategic Fit: 88% (target market expansion, strategic account)
Capability Match: 92% (exceeds on cloud, DevOps, security; meets all others)
Competitive Position: 78% (known differentiator in automation, weak incumbency)

Decision Score: (72 × 0.4) + (88 × 0.3) + (92 × 0.2) + (78 × 0.1)
              = 28.8 + 26.4 + 18.4 + 7.8
              = 81.4 → STRONG GO

Action: Full pursuit, A-team assigned, executive engagement planned
```
2. Qualified Go (60-79 points)
Recommendation: Conditional pursuit with risk mitigation
Characteristics:
- Moderate win probability (typically 40-65%)
- Good strategic alignment OR strong capability match
- Some gaps requiring mitigation
- Competitive pressure present
Typical Action Plan:
- Assign mixed team (experienced leads, developing staff for growth)
- Allocate 60-80% of full proposal budget
- Identify and mitigate specific risks:
- Capability gaps: Teaming, subcontracting, or key hires
- Competitive threats: Differentiation strategy, pricing approach
- Customer relationship: Proactive engagement to clarify needs
- Strategic uncertainty: Define success criteria and exit points
- Set clear go/no-go checkpoint mid-proposal
- Plan for efficient proposal process (leverage templates, past proposals)
- Focus on strengths; don't over-invest in weak areas
Resource Commitment: 60-80% of normal proposal budget and timeline
Success Metrics: 35-45% win rate expected on Qualified Go opportunities
Note
Qualified Go is the "pursue with eyes open" category. You're not the favorite, but you have a reasonable shot. Focus on efficiency and clear differentiation. Be willing to no-bid mid-proposal if risks materialize or a better opportunity emerges.
Conditional Pursuit Strategies:
| Condition | Mitigation Strategy |
|---|---|
| Moderate win probability (45-55%) | Develop clear win themes, engage customer to validate approach |
| Capability gap in rated criteria | Teaming partner or subcontractor with proven capability |
| Unknown competitors | Invest in competitive intelligence, price competitively |
| Tight timeline | Leverage existing content, limit custom development |
| Strategic uncertainty | Define clear success metrics; pursue if learning value justifies cost |
Example Qualified Go Opportunity:
```text
Opportunity: Data Analytics Platform ($800K, 18-month)
Win Probability: 52% (moderate past performance, new customer relationship)
Strategic Fit: 75% (good market alignment, moderate revenue impact)
Capability Match: 68% (strong on analytics, gaps in specific ML algorithms)
Competitive Position: 55% (competitor has relevant case study)

Decision Score: (52 × 0.4) + (75 × 0.3) + (68 × 0.2) + (55 × 0.1)
              = 20.8 + 22.5 + 13.6 + 5.5
              = 62.4 → QUALIFIED GO

Action: Pursue with B-team, identify ML teaming partner, competitive pricing strategy
Risk Mitigation: Engage customer to clarify ML requirements, validate our approach
Exit Criteria: No-bid if teaming partner can't commit or customer feedback is negative
```
3. Qualified No-Go (40-59 points)
Recommendation: Lean toward no-bid; pursue only with strategic override
Characteristics:
- Low-to-moderate win probability (typically 25-40%)
- Limited strategic alignment
- Significant capability gaps or competitive disadvantages
- High opportunity cost (better pursuits available)
Typical Action Plan:
- Default to no-bid decision
- Consider pursuit only if:
- Strategic override: Executive decision to pursue for relationship or market entry
- Learning opportunity: New market/technology where loss is acceptable investment
- Low-cost entry: Minimal proposal effort required (e.g., simplified acquisition)
- No better alternatives: Capacity available and pipeline is thin
- If pursuing with override:
- Assign C-team or junior staff for development opportunity
- Allocate minimal budget (< 40% of normal proposal cost)
- Use as training exercise; don't over-invest
- Set low expectations; plan for loss
- Extract maximum learning value from process
Resource Commitment: < 40% of normal proposal budget (if pursued at all)
Success Metrics: 15-25% win rate on Qualified No-Go pursuits
Warning
Qualified No-Go opportunities are resource traps. The default action should be no-bid. If you pursue more than 10-15% of Qualified No-Go opportunities, you're likely wasting resources that could drive wins in higher-scored categories.
Strategic Override Justifications:
| Override Reason | Validation Criteria |
|---|---|
| Strategic account development | C-level commitment to relationship investment; multi-year account plan |
| Market entry | Documented market strategy; willingness to absorb loss as cost of entry |
| Capability demonstration | Specific capability you need to prove; alternative demo pathways exhausted |
| Competitive blocking | Preventing competitor win has strategic value; cost of pursuit is minimal |
| Partnership requirement | Teaming partner or customer requires your participation despite low win probability |
Example Qualified No-Go Opportunity:
```text
Opportunity: Legacy System Maintenance ($400K, 12-month)
Win Probability: 30% (no past performance with customer, entrenched incumbent)
Strategic Fit: 45% (low strategic value, commodity work, minimal capability development)
Capability Match: 58% (can do the work, but not differentiated)
Competitive Position: 35% (incumbent has 8-year history, strong customer relationships)

Decision Score: (30 × 0.4) + (45 × 0.3) + (58 × 0.2) + (35 × 0.1)
              = 12 + 13.5 + 11.6 + 3.5
              = 40.6 → QUALIFIED NO-GO

Action: NO-BID recommended
Rationale: Low win probability against strong incumbent, limited strategic value, better opportunities in pipeline
Potential Override: Only if customer proactively requests our participation and commits to fair evaluation
```
4. Strong No-Go (0-39 points)
Recommendation: Clear no-bid decision
Characteristics:
- Very low win probability (typically < 25%)
- Poor strategic alignment
- Critical capability gaps (especially mandatory requirements)
- Severe competitive disadvantages (strong incumbent, disqualifying factors)
Typical Action Plan:
- No-bid decision - do not pursue
- Document reason for decision (learning for future opportunities)
- Optional: Submit capability statement or white paper (no-cost marketing)
- Optional: Attend pre-proposal conference for market intelligence (if minimal cost)
- Track for future re-compete if strategic value is high
- Focus resources on higher-probability opportunities
Resource Commitment: Zero proposal budget; no-bid
Success Metrics: Should not pursue; win rate N/A
Warning
Strong No-Go opportunities should never be pursued. If you find yourself overriding Strong No-Go decisions, your scoring model is misconfigured or your decision discipline has broken down. Every Strong No-Go pursuit wastes resources and reduces win rates.
Common Strong No-Go Scenarios:
| Scenario | Characteristic | Action |
|---|---|---|
| Missing mandatory requirement | Cannot meet disqualifying criterion | No-bid; document gap for future capability development |
| Severely under-qualified | Capability match < 40% | No-bid; wrong opportunity for your business |
| Wired procurement | Clear evidence of predetermined outcome | No-bid; file protest only if egregious |
| Unrealistic pricing | Customer budget far below market rates | No-bid; educate customer via white paper if strategic |
| Capacity constraints | Cannot deliver even if won | No-bid; subcontractor role only if customer requests |
Example Strong No-Go Opportunity:
```text
Opportunity: Advanced Satellite Communications System ($15M, 5-year)
Win Probability: 18% (no satellite experience, no relevant past performance)
Strategic Fit: 35% (interesting market but massive capability gap)
Capability Match: 22% (missing 3 of 5 mandatory requirements)
Competitive Position: 25% (major aerospace primes are incumbents)

Decision Score: (18 × 0.4) + (35 × 0.3) + (22 × 0.2) + (25 × 0.1)
              = 7.2 + 10.5 + 4.4 + 2.5
              = 24.6 → STRONG NO-GO

Action: NO-BID
Rationale: Missing critical mandatory requirements (satellite communications protocols, space-rated hardware experience, security clearances). Aerospace primes have overwhelming competitive advantage. Pursuit would be zero-probability waste of resources.
Alternative: Track for future; consider acquiring satellite communications firm if market entry is strategic priority.
```
Scoring Formula and Weights
Default Scoring Formula
```text
Decision Score = (Win Probability × 40%)
               + (Strategic Fit × 30%)
               + (Capability Match × 20%)
               + (Competitive Position × 10%)
```
Each factor is scored 0-100, then weighted and summed to produce the composite Decision Score.
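As a minimal sketch, the formula reduces to a weighted sum. The factor names and the `decision_score` helper below are illustrative; the weights are the documented defaults (40/30/20/10):

```python
# Default factor weights from the scoring formula above (40/30/20/10).
DEFAULT_WEIGHTS = {
    "win_probability": 0.40,
    "strategic_fit": 0.30,
    "capability_match": 0.20,
    "competitive_position": 0.10,
}

def decision_score(factors, weights=DEFAULT_WEIGHTS):
    """Weighted sum of four factor scores, each on a 0-100 scale."""
    return sum(factors[name] * weight for name, weight in weights.items())

# The Strong Go worked example from earlier on this page:
strong_go = decision_score({
    "win_probability": 72,
    "strategic_fit": 88,
    "capability_match": 92,
    "competitive_position": 78,
})
# strong_go rounds to 81.4, matching the worked example
```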
Configurable Weights
Organizations can adjust weights to reflect decision philosophy:
Tip
Weight calibration should be based on historical data. After 20-30 recorded bid outcomes, analyze which factors best predicted wins vs. losses. Adjust weights to align with your organization's actual decision patterns and success drivers.
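As an illustration, here are some hypothetical weight profiles an organization might adopt. Only the "balanced" profile is the documented default; the other profiles and all names here are assumptions. The one hard invariant is that weights must sum to 1.0 so the composite stays on the 0-100 scale:

```python
WEIGHT_PROFILES = {
    "balanced": {  # documented default: 40/30/20/10
        "win_probability": 0.40, "strategic_fit": 0.30,
        "capability_match": 0.20, "competitive_position": 0.10,
    },
    "win_rate_driven": {  # hypothetical: let predicted win probability dominate
        "win_probability": 0.55, "strategic_fit": 0.20,
        "capability_match": 0.15, "competitive_position": 0.10,
    },
    "strategy_driven": {  # hypothetical: growth-stage shop chasing alignment
        "win_probability": 0.30, "strategic_fit": 0.45,
        "capability_match": 0.15, "competitive_position": 0.10,
    },
}

def validate_weights(weights):
    """Reject weight sets that would distort the 0-100 composite scale."""
    total = sum(weights.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"weights must sum to 1.0, got {total}")

for profile in WEIGHT_PROFILES.values():
    validate_weights(profile)
```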
Customizing Decision Thresholds
Organizations can also adjust the score ranges for each outcome:
Default Thresholds:
```js
{
  strongGo: { min: 80, max: 100 },
  qualifiedGo: { min: 60, max: 79 },
  qualifiedNoGo: { min: 40, max: 59 },
  strongNoGo: { min: 0, max: 39 }
}
```
Conservative Threshold Adjustment (higher bar for pursuit):
```js
{
  strongGo: { min: 85, max: 100 },    // Narrower Strong Go band
  qualifiedGo: { min: 70, max: 84 },  // Higher floor for conditional pursuit
  qualifiedNoGo: { min: 50, max: 69 }, // Expanded no-go zone
  strongNoGo: { min: 0, max: 49 }
}
```
Aggressive Threshold Adjustment (lower bar for pursuit):
```js
{
  strongGo: { min: 75, max: 100 },    // Broader Strong Go band
  qualifiedGo: { min: 55, max: 74 },  // Lower floor for conditional pursuit
  qualifiedNoGo: { min: 35, max: 54 }, // Narrower no-go zone
  strongNoGo: { min: 0, max: 34 }
}
```
Decision Ring Visualization
The system presents the Decision Score in a visual Decision Ring that provides at-a-glance understanding:
Ring Components
Outer Ring: Shows the 0-100 score scale with color-coded zones
- Strong Go: Green zone (80-100)
- Qualified Go: Yellow zone (60-79)
- Qualified No-Go: Orange zone (40-59)
- Strong No-Go: Red zone (0-39)
Score Indicator: Needle or marker shows exact composite score position
Center Display:
- Decision Score: Large number (e.g., "73")
- Outcome Label: Text label ("Qualified Go")
- Confidence Interval: ± range based on data quality (e.g., "±8")
Factor Breakdown: Inner segments or side panel showing component scores
- Win Probability: 68% (contributes 27.2 points at 40% weight)
- Strategic Fit: 82% (contributes 24.6 points at 30% weight)
- Capability Match: 75% (contributes 15.0 points at 20% weight)
- Competitive Position: 60% (contributes 6.0 points at 10% weight)
- Total: 72.8 → Qualified Go
Interpreting the Ring
Note
The Decision Ring is a communication tool, designed for rapid executive review. It summarizes hours of analysis into a single visual that executives can interpret in seconds. Always provide supporting detail for stakeholders who want to drill into the factors.
Strong Signal Patterns:
- High confidence Strong Go: Score 85+, all factors green
- Borderline Strong Go: Score 80-82, verify assumptions before full resource commitment
- High Qualified Go: Score 75-79, close to Strong Go; focus on mitigating gaps to push over threshold
- Low Qualified Go: Score 60-65, marginal pursuit; needs clear risk mitigation plan
- High No-Go: Score 55-59, near decision boundary; validate that override isn't appropriate
Mixed Signal Patterns (investigate before finalizing decision):
- High strategic fit, low win probability: Great opportunity, but can we actually win?
- High capability match, low strategic fit: Easy win, but should we care?
- High win probability, low capability match: Something is wrong; validate capability assessment
- High score but low confidence (wide confidence interval): missing data; gather more information before deciding
Factor Scoring Details
Win Probability (0-100%)
Source: AI-calculated based on multiple signals (see Win Probability)
Key Inputs:
- Past performance similarity
- Customer relationship strength
- Capability match quality
- Competitive landscape
- Procurement characteristics
Confidence Level: System provides confidence interval (e.g., 60% ±15%)
Contribution to Decision Score (at 40% weight):
- 80% win probability → 32 points
- 60% win probability → 24 points
- 40% win probability → 16 points
- 20% win probability → 8 points
Strategic Fit (0-100 points)
Source: Weighted scoring against configured strategic priorities (see Strategic Fit)
Calculation:
```text
Strategic Fit = Σ(Priority Score × Priority Weight)
```
where each Priority Score is a 0-100 assessment of how well the opportunity advances that priority.
Example:
```text
Revenue Growth (25% weight): Scored 90 → 22.5 points
Market Expansion (20% weight): Scored 80 → 16.0 points
Capability Development (15% weight): Scored 60 → 9.0 points
Strategic Accounts (20% weight): Scored 70 → 14.0 points
Financial Returns (20% weight): Scored 85 → 17.0 points

Strategic Fit Score = 78.5
```
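The worked example above can be reproduced with a short weighted sum. The priority keys and the `strategic_fit` helper are illustrative names; the weights and scores come from the example:

```python
# Priority weights from the worked example above.
PRIORITY_WEIGHTS = {
    "revenue_growth": 0.25,
    "market_expansion": 0.20,
    "capability_development": 0.15,
    "strategic_accounts": 0.20,
    "financial_returns": 0.20,
}

def strategic_fit(priority_scores, weights=PRIORITY_WEIGHTS):
    """Sum of (0-100 priority score × priority weight)."""
    return sum(priority_scores[name] * w for name, w in weights.items())

fit = strategic_fit({
    "revenue_growth": 90,
    "market_expansion": 80,
    "capability_development": 60,
    "strategic_accounts": 70,
    "financial_returns": 85,
})
# fit rounds to 78.5, matching the worked example
```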
Contribution to Decision Score (at 30% weight):
- 90 strategic fit → 27 points
- 70 strategic fit → 21 points
- 50 strategic fit → 15 points
- 30 strategic fit → 9 points
Capability Match (0-100 points)
Source: Aggregated from requirement-level capability assessments
Calculation:
```text
Capability Match = (Mandatory Requirements Score × 0.6) + (Rated Criteria Score × 0.4)
```
Where:
- Mandatory Requirements Score = % of mandatory requirements where capability ≥ "Meets"
- Rated Criteria Score = Weighted average of all rated requirement capability scores
Capability Scores by Match Level:
- Exceeds: 100 points (demonstrable excellence)
- Meets: 75 points (fully capable)
- Partial: 50 points (capable with gaps)
- Cannot Meet: 0 points (missing capability)
Critical Rule: Any mandatory requirement scored "Cannot Meet" reduces Capability Match to < 40, triggering Strong No-Go unless mitigation is documented.
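A sketch of this aggregation, including the critical rule. The function signature is an assumption, and the exact cap value used to encode "reduces to < 40" (39 here) is illustrative:

```python
# Point values per match level, as documented above.
MATCH_POINTS = {"Exceeds": 100, "Meets": 75, "Partial": 50, "Cannot Meet": 0}

def capability_match(mandatory, rated):
    """
    mandatory: list of match levels, one per mandatory requirement
    rated:     list of (match level, criterion weight) pairs; weights sum to 1.0
    """
    if "Cannot Meet" in mandatory:
        return 39.0  # hard cap -> Strong No-Go unless mitigation is documented
    # % of mandatory requirements where capability is at least "Meets"
    met = sum(1 for level in mandatory
              if MATCH_POINTS[level] >= MATCH_POINTS["Meets"])
    mandatory_score = 100.0 * met / len(mandatory)
    # weighted average of rated criterion scores
    rated_score = sum(MATCH_POINTS[level] * weight for level, weight in rated)
    return mandatory_score * 0.6 + rated_score * 0.4
```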
Contribution to Decision Score (at 20% weight):
- 90 capability match → 18 points
- 70 capability match → 14 points
- 50 capability match → 10 points
- 30 capability match → 6 points
Competitive Position (0-100 points)
Source: Manual assessment of market position and competitive factors (see Competitive Intelligence)
Scoring Rubric:
| Score Range | Market Position | Characteristics |
|---|---|---|
| 80-100 | Leader | You are incumbent OR clear market leader with demonstrable advantages |
| 60-79 | Challenger | Strong competitive position; you vs. 1-2 other credible competitors |
| 40-59 | Niche Player | You have specific differentiator but face strong generalist competitors |
| 20-39 | Underdog | Significant competitive disadvantages; competitor is incumbent or clearly favored |
| 0-19 | Non-Competitive | You are severely outmatched; competitor has disqualifying advantage |
Adjustment Factors:
- +10 points: You have unique differentiator validated by customer
- +5 points: Competitor has notable weakness you can exploit
- -10 points: Competitor is incumbent with strong performance record
- -5 points: Competitor has recent win in this exact domain with this customer
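One way to sketch the rubric plus adjustments: start from a base score for the market-position band, apply the adjustment factors, and clamp to 0-100. The band midpoints and factor keys below are illustrative assumptions; the adjustment magnitudes are the documented ones:

```python
BASE_POSITION = {  # assumed midpoint of each rubric band
    "Leader": 90,
    "Challenger": 70,
    "Niche Player": 50,
    "Underdog": 30,
    "Non-Competitive": 10,
}

ADJUSTMENTS = {
    "validated_differentiator": +10,  # unique differentiator validated by customer
    "exploitable_weakness": +5,       # competitor has a notable weakness
    "strong_incumbent": -10,          # incumbent with strong performance record
    "recent_domain_win": -5,          # competitor recently won this exact domain
}

def competitive_position(position, factors=()):
    score = BASE_POSITION[position] + sum(ADJUSTMENTS[f] for f in factors)
    return max(0, min(100, score))  # keep within the 0-100 scale
```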
Contribution to Decision Score (at 10% weight):
- 80 competitive position → 8 points
- 60 competitive position → 6 points
- 40 competitive position → 4 points
- 20 competitive position → 2 points
Decision Workflow
Advanced Scoring Scenarios
Handling Borderline Scores
Scores within ±3 points of a threshold require additional scrutiny:
Example: Score of 78 (Qualified Go, but 2 points below Strong Go threshold of 80)
Investigation Questions:
- Which factor is holding back the score? (e.g., win probability is 55%, pulling down composite)
- Can we improve that factor? (e.g., customer engagement to increase win probability)
- Is the opportunity worth pushing into Strong Go? (resource availability, strategic importance)
- What's the confidence interval? (78 ±10 could actually be Strong Go territory)
Decision Options:
- Accept Qualified Go status and pursue with appropriate resource constraints
- Invest to improve weak factor (customer engagement, teaming, competitive intel) and re-score
- Treat as "high Qualified Go" with slightly elevated resources
- If very close to threshold and high confidence, round up and treat as low Strong Go
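The ±3-point rule can be automated with a hypothetical helper that flags a score when it sits near a tier boundary, or when its confidence interval reaches one (all names and defaults here are assumptions):

```python
TIER_BOUNDARIES = (40, 60, 80)  # floors separating the four outcome bands

def is_borderline(score, ci=0.0, margin=3.0):
    """True if the score (or its ± confidence interval) touches a boundary."""
    reach = max(margin, ci)
    return any(abs(score - boundary) <= reach for boundary in TIER_BOUNDARIES)

is_borderline(78)         # True: 2 points below the Strong Go floor
is_borderline(70)         # False: comfortably mid-band
is_borderline(70, ci=11)  # True: the ±11 interval reaches both 60 and 80
```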
Portfolio-Level Scoring
Individual opportunity scores should be considered in portfolio context:
Resource Constraint Scenario:
- You have 3 active Strong Go pursuits consuming all proposal resources
- New opportunity scores Qualified Go (72 points)
- Decision: No-bid the Qualified Go to maintain quality on Strong Go pursuits
Pipeline Gap Scenario:
- You have only 1 active pursuit in pipeline
- Opportunity scores Qualified Go (64 points)
- Decision: Pursue as resource is available, even though it's borderline
Strategic Override Scenario:
- Opportunity scores Qualified No-Go (52 points)
- BUT: It's with your #1 strategic account and they've specifically requested your bid
- Decision: Override to Qualified Go, but with clear resource constraints and learning focus
Warning
Don't let portfolio pressure distort scoring. Score opportunities objectively based on their merits, then make portfolio-level decisions about which to pursue. Inflating scores to justify pursuit undermines the decision framework's integrity.
Sensitivity Analysis
For high-value or strategic opportunities, perform sensitivity analysis:
What-If Scenarios:
- If we team with Partner X, capability match improves from 68% to 85%
- New Decision Score: 72 → 75.4 (a 17-point capability gain at 20% weight adds 3.4 points; still Qualified Go, but stronger)
- If we engage customer and validate approach, win probability might increase from 55% to 68%
- New Decision Score: 64 → 69 (remains Qualified Go, but higher confidence)
- If competitor Y decides not to bid, competitive position improves from 50 to 75
- New Decision Score: 66 → 68.5 (modest improvement)
Sensitivity Insights:
- Which factor has the most leverage? (Focus mitigation efforts there)
- Can we realistically improve the score enough to change outcome tier?
- Is the improvement worth the investment? (e.g., is customer engagement likely to actually increase win probability?)
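At the default weights, each what-if reduces to a one-line delta: how much the composite moves when one factor changes. The `score_delta` helper is an illustrative name:

```python
# Default factor weights (40/30/20/10) from the scoring formula.
WEIGHTS = {
    "win_probability": 0.40,
    "strategic_fit": 0.30,
    "capability_match": 0.20,
    "competitive_position": 0.10,
}

def score_delta(factors, factor, new_value):
    """Change in the composite if `factor` moved to `new_value`."""
    return (new_value - factors[factor]) * WEIGHTS[factor]

# Teaming scenario: capability match 68 -> 85 moves the composite by +3.4
# Engagement scenario: win probability 55 -> 68 moves the composite by +5.2
```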
Calibration and Continuous Improvement
Quarterly Score Calibration
Every quarter, review decision score accuracy:
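A hypothetical calibration check: compare each tier's expected win-rate band (from the Success Metrics above) against the quarter's actual recorded outcomes. All function and variable names are illustrative:

```python
EXPECTED_WIN_RATE = {  # (min, max) expected win rate per tier, per this page
    "Strong Go": (0.60, 0.70),
    "Qualified Go": (0.35, 0.45),
    "Qualified No-Go": (0.15, 0.25),
}

def calibration_report(outcomes):
    """outcomes: tier -> list of booleans (True = win) for the quarter."""
    report = {}
    for tier, results in outcomes.items():
        if not results:
            continue  # no pursuits recorded in this tier
        actual = sum(results) / len(results)
        low, high = EXPECTED_WIN_RATE[tier]
        if actual < low:
            report[tier] = f"under band ({actual:.0%} < {low:.0%}): scoring may be too generous"
        elif actual > high:
            report[tier] = f"over band ({actual:.0%} > {high:.0%}): scoring may be too conservative"
        else:
            report[tier] = f"in band ({actual:.0%})"
    return report
```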
Learning from Overrides
Executive overrides provide valuable calibration data:
Track Override Patterns:
```text
Override: Qualified No-Go → Go (Strategic Account Justification)
Original Score: 48 (Win Prob 35%, Strategic Fit 55%, Capability 60%, Competitive 40%)
Override Rationale: "CEO relationship with customer CIO; strategic account development priority"
Outcome: LOSS (competitor scored higher on technical; price was non-competitive)
Learning: Strategic relationship wasn't sufficient to overcome technical gaps; need better capability match for strategic overrides
```
Override Analysis Questions:
- What % of overrides resulted in wins vs. losses?
- Were override justifications validated by outcomes?
- Do certain executives have better override judgment than others?
- Are there systematic override patterns that should be incorporated into scoring model?
Model Refinement:
- If "strategic account" overrides win > 50%, consider increasing strategic fit weight
- If "CEO relationship" overrides win < 30%, document that relationships alone don't overcome technical/competitive gaps
- If certain override types consistently win, codify them into automated scoring
Integration with Approval Workflows
Decision scores can trigger approval workflows:
Approval Authority by Score and Value
| Decision Score | Opp. Value | Approval Required | Review Board |
|---|---|---|---|
| Strong Go | ≤ $2M | BD Director | Optional |
| Strong Go | > $2M | VP or above | Executive Committee |
| Qualified Go | < $500K | BD Director | None |
| Qualified Go | $500K-$2M | VP | Finance + Technical review |
| Qualified Go | > $2M | VP or CEO | Executive Committee + Board notification |
| Qualified No-Go | Any | VP + documented override rationale | Required |
| Strong No-Go | Any | CEO-level override only | Required + documented justification |
Automated Workflow Triggers
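As a sketch, the approval table above can be encoded as a routing function that a workflow engine could call on each scored opportunity. The dollar thresholds and role strings mirror the table; the function name and signature are illustrative:

```python
def approval_authority(outcome, value):
    """Return the approval required for a (decision outcome, opportunity value)."""
    if outcome == "Strong No-Go":
        return "CEO-level override only"
    if outcome == "Qualified No-Go":
        return "VP + documented override rationale"
    if outcome == "Strong Go":
        return "VP or above" if value > 2_000_000 else "BD Director"
    if outcome == "Qualified Go":
        if value > 2_000_000:
            return "VP or CEO"
        if value >= 500_000:
            return "VP"
        return "BD Director"
    raise ValueError(f"unknown outcome: {outcome}")
```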
Best Practices
Maintain Scoring Objectivity
Tip
Score opportunities based on their merits before considering resource constraints or portfolio factors. Objective scoring creates a common language and prevents justification bias (distorting scores to rationalize a predetermined decision).
Do:
- Score each factor independently based on evidence
- Document assumptions and data sources for each score
- Use consistent rubrics and definitions across opportunities
- Review scores with cross-functional team for bias checks
- Calibrate scores quarterly based on outcome data
Don't:
- Inflate scores to justify pursuing an opportunity you "like"
- Deflate scores to justify no-bidding an opportunity you want to avoid
- Let resource availability influence scoring (address in decision, not score)
- Score based on "gut feel" without documenting rationale
- Cherry-pick data to support desired score
Set Clear Decision Cadence
Establish standard timelines for each decision phase:
| Phase | Timeline | Deliverable |
|---|---|---|
| Initial Screening | Within 24 hours of opportunity discovery | Go/No-Go for detailed analysis |
| Detailed Analysis | 3-5 business days | Factor scores and decision score |
| Team Review | 1-2 business days | Validated scores and recommendation |
| Approval | 1-2 business days | Formal go/no-go decision |
| Resource Allocation | Immediate upon go decision | Team assignments and budget |
Warning
Avoid "decide to decide later". Make clear go/no-go calls within established timelines. Indecision consumes resources without producing outcomes. If you need more information, set a specific date for re-evaluation and gather that information actively.
Balance Model and Judgment
The scoring model is a tool, not a mandate:
Trust the Model When:
- Strong Go or Strong No-Go (clear signals)
- Data quality is high (all factors well-validated)
- Historical calibration is strong (model has proven accuracy)
- Decision is consistent with past similar opportunities
Apply Judgment When:
- Borderline scores (within ±5 points of threshold)
- Missing or low-quality data (wide confidence intervals)
- Novel opportunity types (model has limited training data)
- Strategic factors the model doesn't capture
Document Judgment:
- When overriding model recommendation, document rationale
- Track override outcomes to validate judgment accuracy
- Review overrides quarterly to identify patterns for model refinement