
Bid Performance Widgets

Master bid performance tracking with win rate analysis, pipeline management, pricing optimization, and team efficiency metrics

Updated 2026-03-30 · 21 min read

Bid Performance widgets track your team's effectiveness at winning contracts and managing the pursuit pipeline. These widgets transform raw bid data into actionable insights that improve win rates, optimize pricing, identify bottlenecks, and maximize ROI on proposal investments.

This guide covers each Bid Performance widget in detail, including advanced configuration strategies, interpretation frameworks, and real-world optimization playbooks based on procurement best practices.

The Performance Management Framework

Effective bid performance management follows a continuous improvement cycle:

1. Measure → Track win rate, pricing accuracy, effort investment, pipeline health
2. Analyze → Identify patterns (what drives wins vs. losses?)
3. Optimize → Adjust strategy based on insights (improve pricing, qualification, teaming)
4. Repeat → Continuously measure impact of changes

Bid Performance widgets power the "Measure" step, providing reliable data for the entire cycle.

Note

Data quality matters: Performance widgets are only as good as your bid data. Ensure you consistently record all bids (wins AND losses), actual submitted prices, and detailed debriefs. Incomplete data leads to inaccurate insights and poor decisions.


Win Rate Analysis Widget

Overview

The Win Rate Analysis widget is your primary performance scorecard. It calculates and visualizes your win/loss ratios across multiple dimensions, answering: "How effective are we at winning contracts, and where should we improve?"

Primary use case: Performance tracking, quarterly reviews, capability assessment

Typical position: Top-left of dashboard (primary KPI for most organizations)

Refresh rate: 5 minutes (semi-real-time for active tracking)

What It Displays

The Win Rate Analysis widget shows:

1. Overall Win Rate

Large percentage display with trend indicator:

Win Rate: 32%
48 wins, 102 losses, 18 pending
▲ +5% vs. previous period

2. Win Rate Trend

Line chart showing win rate over time (weekly or monthly):

  • X-axis: Time period (last 12 months)
  • Y-axis: Win rate percentage
  • Visual: Line chart with trend line (moving average)
  • Annotations: Mark significant events (e.g., "Hired proposal manager", "New pricing strategy")

3. Breakdown Tables

Detailed win rate by dimension (configurable):

Category              | Bids | Wins | Losses | Win Rate
IT Services           | 40   | 18   | 22     | 45%
Professional Services | 43   | 12   | 31     | 28%
Construction          | 23   | 8    | 15     | 35%

Or by agency, value range, team member, complexity, etc.

Configuration Options

Option            | Values                                                | Default               | Description
Time range        | 90d, 6m, 1y, 2y, All time                             | 1y                    | Period for win rate calculation
Include pending   | Yes, No, Weighted                                     | No                    | How to treat pending proposals
Breakdown by      | Time, Category, Agency, Team, Value range, Complexity | Time                  | Dimension for detailed analysis
Value weighting   | Equal, By value                                       | Equal                 | Treat all bids equally or weight by dollar value
Minimum bids      | 1, 5, 10, 20                                          | 5                     | Minimum bids required to calculate rate (avoid statistical noise)
Comparison period | None, Previous period, Previous year                  | Previous period       | Baseline for trend indicators
Trend calculation | Simple, Moving average (7d/30d/90d)                   | 30-day moving average | Method for smoothing trend line

Advanced settings:

  • Confidence intervals: Show statistical confidence bounds (useful for small sample sizes)
  • Exclude outliers: Remove mega-contracts (>$50M) that skew win rate
  • Stage-specific: Calculate win rate at specific stages (qualified-to-proposal, proposal-to-win)

Deep Dive: Value-Weighted Win Rate

Equal weighting (default):

Bids: 10
Wins: 3 (two $100K contracts, one $5M contract)
Losses: 7 (all $100K-$500K contracts)
Win Rate: 30% (3 wins ÷ 10 bids)

Value weighting:

Total bid value: $10M
Won value: $5.2M ($100K + $100K + $5M)
Lost value: $4.8M
Win Rate: 52% ($5.2M ÷ $10M)

Interpretation: You win 30% of bids by count but 52% by value. This means you're winning the large, high-value opportunities and losing smaller ones.

Strategic implication: Your capture process is optimized for large deals (strong past performance, detailed proposals, competitive pricing). Continue focusing on >$1M opportunities where you excel. Avoid pursuing <$500K unless they're strategic (e.g., foothold at new agency).
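The two calculations above can be sketched as a small helper. The `bids` list of (won, value) pairs is a hypothetical reconstruction of the example: the seven losses are split evenly across the $4.8M lost value for illustration.

```python
def win_rates(bids):
    """Return (count_rate, value_rate) for a list of (won, value) bids."""
    won_values = [value for won, value in bids if won]
    total_value = sum(value for _, value in bids)
    count_rate = len(won_values) / len(bids)
    value_rate = sum(won_values) / total_value
    return count_rate, value_rate

# Figures from the example above: 3 wins (two $100K, one $5M),
# 7 losses totaling $4.8M (split evenly for illustration)
bids = [(True, 100_000), (True, 100_000), (True, 5_000_000)]
bids += [(False, 4_800_000 / 7)] * 7

count_rate, value_rate = win_rates(bids)
print(f"{count_rate:.0%} by count, {value_rate:.0%} by value")  # 30% by count, 52% by value
```

The gap between the two numbers is the signal: when value-weighted win rate runs well above count-based win rate, your wins are concentrated in larger deals.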

Use Case Examples

Example 1: Quarterly Performance Review

Scenario: CEO wants to understand Q1 2026 win rate performance vs. Q4 2025 and identify areas for improvement.

Dashboard setup:

  • Win Rate Analysis (8 cols, prominent position)
  • Time range: 6m (shows Q4 2025 and Q1 2026)
  • Breakdown by: Category
  • Value weighting: Equal (count-based for simplicity)
  • Comparison: Previous period (Q1 2026 vs. Q4 2025)

Workflow:

Outcome: Data-driven performance review with specific improvement targets and root cause understanding.


Example 2: Capability Assessment for Go/No-Go

Scenario: You're evaluating a $2.3M cybersecurity RFP and need to assess your historical cybersecurity win rate to inform go/no-go decision.

Dashboard setup:

  • Win Rate Analysis (4 cols, compact view)
  • Time range: All time (complete historical data)
  • Breakdown by: Custom filter → Cybersecurity subcategory
  • Minimum bids: 3 (need at least 3 data points for reliability)

Workflow:

  1. Filter Win Rate Analysis to cybersecurity opportunities
  2. Review cybersecurity win rate: 18% (3 wins, 14 losses, 0 pending)
  3. Compare to overall win rate: 32%

Interpretation: Your cybersecurity win rate (18%) is significantly below overall (32%). You're weak in cybersecurity.

Go/No-Go factors:

  • Against: Historical 18% win rate = low probability
  • For: High-value opportunity ($2.3M), strategic importance (establish cybersecurity track record)
  • Decision framework:
    • If you have differentiator (e.g., unique certification, strong teaming partner) → Go (attempt to overcome historical weakness)
    • If no differentiator → No-go (18% win probability not worth proposal investment)

Final decision: No-go unless you can identify specific competitive advantage for THIS opportunity that explains why this bid would be different from historical 18% win rate.


Example 3: Team Performance Benchmarking

Scenario: You have 4 proposal managers and want to identify top performers and development opportunities.

Dashboard setup:

  • Win Rate Analysis (12 cols, full width table view)
  • Time range: 1y
  • Breakdown by: Team member (proposal manager)
  • Minimum bids: 5 (each PM must have managed at least 5 bids)

Workflow:

PM Name  | Bids | Wins | Losses | Win Rate | Avg Bid Value
Sarah    | 15   | 8    | 7      | 53%      | $3.2M
Mike     | 28   | 11   | 17     | 39%      | $1.8M
Jennifer | 22   | 5    | 17     | 23%      | $2.1M
David    | 18   | 7    | 11     | 39%      | $2.5M

Insights:

  1. Sarah is top performer (53% win rate, highest avg bid value)

    • Action: Study Sarah's approach (proposal process, pricing methodology, capture strategy). Document as best practices. Mentor Jennifer using Sarah's methods.
  2. Jennifer is struggling (23% win rate, well below team avg of 39%)

    • Action: Pair Jennifer with Sarah for next 3 bids (shadow and learn). If no improvement after 6 months, consider role change or additional training.
  3. Mike and David are solid (39% win rate each, aligned with team avg)

    • Action: Maintain performance, look for category specializations (do they have higher win rates in specific categories?)

Advanced analysis: Drill into Jennifer's 17 losses. Debrief data shows:

  • 8 losses due to pricing (47%)
  • 6 losses due to past performance (35%)
  • 3 losses due to technical approach (18%)

Specific development plan for Jennifer:

  • Pricing: Additional training on cost estimation and competitive pricing analysis (primary weakness)
  • Past performance: Work with BD to develop stronger reference relationships before pursuing bids
  • Technical: Shadow Sarah's technical writing process

Advanced Techniques

Statistical Significance Testing

For small sample sizes (<20 bids), win rate has high variance. Example:

  • 10 bids, 3 wins = 30% win rate
  • If next 2 bids are both wins → 42% win rate (+12%)
  • If next 2 bids are both losses → 25% win rate (-5%)

Solution: Enable confidence intervals in advanced settings.

Win Rate: 30% (95% CI: 15-48%)

This means true win rate is likely between 15-48% (wide range due to small sample). Don't over-interpret 30% as precise—collect more data before making major strategy changes.
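One common way to compute such bounds is the Wilson score interval (the widget's exact method isn't documented, so this is an illustrative sketch; the interval it produces for 3 wins in 10 bids is wider than the 15-48% shown above):

```python
import math

def wilson_ci(wins, bids, z=1.96):
    """Wilson score interval for a win rate (z=1.96 gives ~95% confidence)."""
    if bids == 0:
        return (0.0, 0.0)
    p = wins / bids
    denom = 1 + z**2 / bids
    center = (p + z**2 / (2 * bids)) / denom
    half = z * math.sqrt(p * (1 - p) / bids + z**2 / (4 * bids**2)) / denom
    return (max(0.0, center - half), min(1.0, center + half))

low, high = wilson_ci(wins=3, bids=10)
print(f"Win rate 30% (95% CI: {low:.0%}-{high:.0%})")
```

The practical takeaway is the same either way: with 10 bids the plausible range spans tens of percentage points, so treat the point estimate as provisional.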

Cohort Analysis

Track win rate for specific cohorts over time:

  • Cohort 1: Bids with technical volume >50 pages
  • Cohort 2: Bids with technical volume <50 pages

Example result:

  • Cohort 1: 45% win rate (detailed proposals)
  • Cohort 2: 28% win rate (concise proposals)

Insight: Your proposals perform better when they're comprehensive (>50 pages). This suggests evaluators value thoroughness. Adjust templates to encourage detailed technical approaches.

Marginal Win Rate

Track win rate improvement after specific changes:

Baseline (Jan-Jun 2025): 28% win rate

Change implemented (July 2025): New pricing methodology

Marginal win rate (Jul-Dec 2025): 36% win rate

Improvement: +8% attributable to new pricing methodology. This validates the change and justifies continued use.
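Before attributing the full +8% to the new methodology, it is worth checking whether the difference could be sampling noise. A two-proportion z-test is one simple check; the bid counts below are illustrative, not taken from the widget:

```python
import math

def two_proportion_z(wins_a, n_a, wins_b, n_b):
    """z statistic for H0: both periods share the same true win rate."""
    p_a, p_b = wins_a / n_a, wins_b / n_b
    pooled = (wins_a + wins_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative counts: 14/50 (28%) before the change, 18/50 (36%) after
z = two_proportion_z(14, 50, 18, 50)
print(f"z = {z:.2f}")  # |z| < 1.96 here, so not significant at 95% with these counts
```

With ~50 bids per period an 8-point swing is suggestive but not conclusive; keep tracking the cohort before declaring victory.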

Success

Pro tip: When you implement a process change (new pricing tool, proposal template, teaming strategy), create a cohort to track marginal impact. This provides data to justify (or discontinue) the change rather than relying on anecdotes.


Bid Statistics Widget

Overview

The Bid Statistics widget provides a high-level summary of bidding activity and pipeline health. It answers: "How many active pursuits do we have, and what's our pipeline value?"

Primary use case: Daily standup, quick pipeline check, team capacity assessment

Typical position: Top row of dashboard (alongside Win Rate Analysis)

Refresh rate: 30 seconds (real-time via WebSocket)

What It Displays

Dashboard cards layout:

┌─────────────────┬─────────────────┬─────────────────┐
│ Total Bids (90d)│  Pending Bids   │   Won Bids      │
│      87         │       18        │       24        │
│                 │  $47M pipeline  │  $12.5M value   │
└─────────────────┴─────────────────┴─────────────────┘

┌─────────────────┬─────────────────┬─────────────────┐
│   Lost Bids     │  Win Rate       │ Avg Proposal    │
│      45         │      28%        │  Time: 28 days  │
│                 │                 │                 │
└─────────────────┴─────────────────┴─────────────────┘

Conversion funnel (visual):

Qualified: 130 opportunities
     ↓ 67% conversion
Proposal: 87 bids
     ↓ 28% conversion
Won: 24 contracts

Configuration Options

Option                 | Values                  | Default | Description
Time range             | 30d, 90d, 1y, All time  | 90d     | Period for statistics
Status filter          | All, Pending, Won, Lost | All     | Show specific statuses only
Categories             | All, Custom selection   | All     | Filter to categories
Team members           | All, Custom selection   | All     | Filter to specific PMs
Include pipeline value | Yes, No                 | Yes     | Show dollar value of pending bids
Show conversions       | Yes, No                 | Yes     | Display qualified→proposal→win conversion rates

Interpretation Guide

Example statistics:

Total Bids (90d): 87
  - Pending: 18 ($47M pipeline)
  - Won: 24 ($12.5M value, 28% win rate)
  - Lost: 45 (52% loss rate)

Conversion Metrics:
  - Qualified to Proposal: 67% (87 proposals from 130 qualified ops)
  - Proposal to Win: 28% (24 wins from 87 proposals)
  - Qualified to Win: 18% (24 wins from 130 qualified ops)

Avg Time in Pipeline:
  - Qualified stage: 12 days
  - Proposal development: 28 days
  - Awaiting award decision: 45 days

Key metrics explained:

1. Pipeline value ($47M)

Sum of all pending bids. This is your potential revenue if you win every pending bid (unrealistic but useful for understanding upper bound).

Realistic pipeline forecast: Pipeline value × historical win rate = $47M × 28% = $13.2M expected wins from current pipeline.

2. Qualified-to-Proposal conversion (67%)

Of 130 qualified opportunities, you pursued 87 (67%). The remaining 33% (43 opportunities) were no-bid decisions after qualification.

High conversion (>70%): You're bidding almost everything you qualify. Either:

  • Excellent qualification process (only qualifying winnable opportunities), OR
  • Weak qualification (letting poor-fit opportunities through to proposal stage)

Low conversion (<50%): You're no-bidding many qualified opportunities. Either:

  • Very selective (good, if you're winning high percentage of what you pursue), OR
  • Resource constrained (can't pursue everything, leaving opportunities on table)

Optimal range: 50-70%. This suggests thoughtful qualification and selective pursuit.

3. Proposal-to-Win conversion (28%)

Of 87 proposals submitted, you won 24 (28%).

Industry benchmarks:

  • Government contracting: 25-40% (you: 28% ✓)
  • Commercial B2B: 35-50%
  • RFP responses (unsolicited): 10-20%

28% is within healthy range for government contracting. Below 25% suggests non-competitive proposals (pricing, technical, past performance). Above 40% suggests you're under-bidding (could pursue more opportunities).

4. Qualified-to-Win conversion (18%)

Of 130 qualified opportunities, you won 24 (18%). This is your true "end-to-end" success rate from initial qualification to contract award.

Benchmark: 15-25% is typical for government contracting.

18% = roughly 1 in 5.5 qualified opportunities becomes a win. If you want to win 50 contracts this year, you need to qualify roughly 278 opportunities (50 ÷ 0.18 ≈ 278).
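The conversion arithmetic above is easy to script. The figures mirror the example statistics; note that using the exact 24/130 rate (≈18.5%) rather than the rounded 18% gives a slightly lower opportunity target:

```python
import math

def funnel_metrics(qualified, proposals, wins):
    """Stage-to-stage and end-to-end conversion rates for a bid funnel."""
    return {
        "qualified_to_proposal": proposals / qualified,
        "proposal_to_win": wins / proposals,
        "qualified_to_win": wins / qualified,
    }

def required_qualified(target_wins, qualified_to_win):
    """Qualified opportunities needed to hit a win target, rounded up."""
    return math.ceil(target_wins / qualified_to_win)

m = funnel_metrics(qualified=130, proposals=87, wins=24)
print({k: f"{v:.0%}" for k, v in m.items()})
print(required_qualified(50, m["qualified_to_win"]))  # 271 with the exact 24/130 rate
```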

Use Case Examples

Example 1: Daily Standup - Pipeline Health Check

Scenario: Team standup every morning to review pipeline and identify risks.

Dashboard setup:

  • Bid Statistics (6 cols, top-center of dashboard)
  • Time range: 30d (recent activity only)
  • Status filter: Pending (focus on active bids)
  • Show conversions: Yes

Workflow:

  1. Check pending count: 18 pending bids
  2. Review pipeline value: $47M
  3. Identify capacity: 18 pending bids ÷ 4 PMs = 4.5 bids per PM (manageable)
  4. Assess deadlines: Click "Pending" to see list → 3 bids due this week (flagged in red)
  5. Discuss risks: Any of the 3 imminent deadlines at risk? Need to add resources?

Decision: One bid (DOD IT Services, $3.2M) is 85% complete but technical volume needs peer review. Assign Jennifer to support final review (pulls from lower-priority bid with later deadline).

Time investment: 5 minutes daily

Outcome: Team aligned on priorities, risks identified early, resources reallocated proactively.


Example 2: Monthly Capacity Planning

Scenario: Planning next month's capacity to determine if you can accept new opportunities.

Dashboard setup:

  • Bid Statistics (4 cols)
  • Time range: 30d (look-ahead to next month's deadlines)
  • Status filter: Pending
  • Include pipeline value: Yes

Workflow:

  1. Current pending: 18 bids
  2. Expected new qualifications next month: 15 (based on historical avg)
  3. Expected awards next month: 5 (28% of 18 pending)
  4. Net change: 18 current pending − 5 awards + 15 new = 28 pending by end of month
  5. Capacity check: 28 bids ÷ 4 PMs = 7 bids per PM (stretching capacity limit of 6-8 per PM)

Decision: At capacity. If a high-value opportunity (>$5M) emerges, no-bid some lower-value pending bids to make room. Do not add new pursuits <$1M in value.

Alternatively, bring in temp proposal writer to increase capacity for the month.


Example 3: Qualification Process Evaluation

Scenario: You suspect your qualification process is too loose (letting poor-fit opportunities through).

Dashboard setup:

  • Bid Statistics (6 cols)
  • Time range: 1y (sufficient data for analysis)
  • Show conversions: Yes

Workflow:

  1. Review Qualified-to-Proposal conversion: 67%
  2. Review Proposal-to-Win conversion: 28%

Analysis:

  • High Qualified-to-Proposal (67%): You're pursuing 2 out of 3 qualified opportunities. This is slightly high (suggests weak qualification).
  • Moderate Proposal-to-Win (28%): Win rate is acceptable but not exceptional.

Hypothesis: If qualification were tighter (67% → 60% Qualified-to-Proposal), you'd pursue fewer but better-fit opportunities, improving Proposal-to-Win (28% → 35%).

Test:

Implement stricter qualification criteria for next quarter:

  • Must have 3+ relevant past performance references (vs. previous 1+)
  • Must have 70%+ capability match (vs. previous 50%+)
  • Must have existing agency relationship OR strong teaming partner

Expected outcome:

  • Qualified-to-Proposal drops to 60% (more no-bids)
  • Proposal-to-Win improves to 35% (better-fit bids)
  • Overall Qualified-to-Win improves from 18% to 21% (60% × 35% = 21%, a ~17% relative gain)

Measure: Track conversions for Q2 2026 and compare to Q1 2026 baseline.


Best Practices

1. Set pipeline value targets

Define minimum pipeline value to support revenue goals:

Example:

  • Annual revenue target: $50M
  • Historical win rate: 28%
  • Required pipeline: $50M ÷ 28% = $178M pipeline

At any given time, you need $178M ÷ 4 quarters = $45M pending pipeline to stay on track for $50M annual revenue.

If Bid Statistics shows $30M pipeline → Below target, need to qualify and pursue more opportunities.
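The target calculation is simple enough to script as a sanity check; the revenue target and win rate below are the example's figures (the text's $178M truncates the exact $178.6M):

```python
def required_pipeline(annual_revenue_target, win_rate, quarters=4):
    """Total annual and per-quarter pipeline value needed for a revenue target."""
    total = annual_revenue_target / win_rate
    return total, total / quarters

total, per_quarter = required_pipeline(50_000_000, 0.28)
print(f"Total: ${total / 1e6:.0f}M, per quarter: ${per_quarter / 1e6:.0f}M")
```

Re-run this whenever the historical win rate moves; a few points of win rate shifts the required pipeline by tens of millions.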

2. Track conversion trends, not absolutes

67% Qualified-to-Proposal is not inherently good or bad—it depends on context.

More valuable: Is it increasing or decreasing?

  • 67% this quarter vs. 60% last quarter = trend toward pursuing more (why? desperation or increased capacity?)
  • 67% this quarter vs. 75% last quarter = trend toward more selectivity (why? tighter qualification or resource constraints?)

Track trends to understand strategic shifts.

3. Segment by value range

Conversion rates vary by opportunity size:

Value Range | Qualified-to-Proposal | Proposal-to-Win
<$500K      | 80%                   | 35%
$500K-$5M   | 65%                   | 28%
>$5M        | 45%                   | 18%

Insight: You pursue most small opportunities (80% qualified-to-proposal) and win 35% (above-average). You're selective on large opportunities (45% qualified-to-proposal) but win only 18% (below-average).

Implication: Your strength is small-to-medium deals. Avoid large deals (>$5M) unless you have strong differentiator—your 18% win rate suggests you're not competitive in that segment.

4. Use real-time refresh for active management

Set Bid Statistics to 30-second refresh (WebSocket) so it updates live during the work day.

When a bid status changes (proposal submitted → awaiting award, or awaiting award → won), the widget updates immediately. Team sees current state without manual refresh.

This is especially valuable for leadership dashboards—executives can monitor pipeline in real-time.

5. Combine with Pipeline Overview for funnel visualization

Bid Statistics provides numbers; Pipeline Overview provides visual funnel. Use both:

  • Bid Statistics: Quick numerical summary (standalone widget)
  • Pipeline Overview: Detailed funnel with stage-by-stage conversion rates (see below)

Place side-by-side for comprehensive pipeline view.


Pipeline Overview Widget

Overview

The Pipeline Overview widget visualizes opportunities moving through your bid process stages as a funnel. It answers: "Where are opportunities in our process, and where do they get stuck?"

Primary use case: Bottleneck identification, forecasting, process improvement

Typical position: Center of dashboard (medium-large for funnel visibility)

Refresh rate: 30 seconds (real-time)

What It Displays

Funnel visualization:

┌────────────────────────────────────────┐
│  Qualified: 130 opportunities          │
└────────────────────────────────────────┘
          ↓ 67% conversion (87 advance)
     ┌──────────────────────────────┐
     │  Proposal: 87 opportunities  │
     └──────────────────────────────┘
          ↓ 78% conversion (68 advance)
       ┌────────────────────────┐
       │ Submitted: 68 bids     │
       └────────────────────────┘
          ↓ 35% conversion (24 advance)
         ┌──────────────────┐
         │  Won: 24 contracts│
         └──────────────────┘

Additional metrics:

  • Average time in each stage
  • Drop-off reasons (why opportunities exit pipeline)
  • Value at each stage (total dollar value)
  • Forecast (based on conversion rates, how many of current qualified will become wins?)

Configuration Options

Option        | Values                         | Default                                 | Description
Time range    | 30d, 90d, 1y                   | 90d                                     | Period for pipeline snapshot
Stages        | Custom stage names             | Qualified, Proposal, Submitted, Awarded | Your organization's bid stages
Metric        | Count, Value                   | Count                                   | Show number of opportunities or total dollar value
Status filter | Active, All                    | Active                                  | Show only active opportunities or include closed
Forecast mode | None, Conservative, Aggressive | Conservative                            | Forecasting methodology

Interpretation Guide

Example pipeline:

Stage 1 - Qualified: 130 opportunities ($180M value)
  → 67% advance to Proposal (87 ops)
  → 33% no-bid (43 ops, $58M lost value)

Stage 2 - Proposal: 87 opportunities ($122M value)
  → 78% advance to Submitted (68 ops)
  → 22% withdrawn (19 ops, $27M lost value)

Stage 3 - Submitted: 68 bids ($95M value)
  → 35% advance to Won (24 ops)
  → 65% lost (44 ops, $61M lost value)

Stage 4 - Won: 24 contracts ($34M value)
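The widget's forecast can be approximated by pushing the current qualified count through the historical stage conversion rates (a naive model that assumes the rates stay constant; the widget's actual forecasting method isn't documented here):

```python
def forecast_wins(qualified, stage_rates):
    """Expected wins from current qualified ops, given per-stage conversion rates."""
    count = qualified
    for rate in stage_rates:
        count *= rate
    return count

# Historical rates from the example pipeline:
# qualified→proposal, proposal→submitted, submitted→won
expected = forecast_wins(130, [0.67, 0.78, 0.35])
print(f"~{expected:.0f} expected wins")  # ~24 wins from 130 qualified opportunities
```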

Key insights:

1. Biggest drop-off: Qualified → Proposal (33% exit)

Why are 43 opportunities (33%) no-bid after qualification?

Possible reasons:

  • Resource constraints (can't pursue everything)
  • Better opportunities emerge (prioritization)
  • Qualification criteria too loose (poor-fit ops making it through)

Investigation: Review no-bid reasons for the 43 dropped opportunities. If most are "resource constraints," you're leaving money on table—consider expanding team. If most are "poor fit," tighten qualification criteria.

2. High conversion: Proposal → Submitted (78%)

Once you start a proposal, 78% make it to submission. Only 22% are withdrawn mid-process.

Positive sign: You're not abandoning proposals mid-stream (costly waste of effort). Your go/no-go decisions happen at Qualified stage (before investing proposal effort).

3. Moderate conversion: Submitted → Won (35%)

35% of submitted bids win contracts. This is slightly above industry average (25-30%) but below top performers (40%+).

Opportunity: Focus on improving submitted-bid competitiveness:

  • Pricing optimization (are you priced competitively?)
  • Technical approach (are proposals compelling?)
  • Past performance (do you have strong references?)

Improving Submitted → Won from 35% to 40% (+5 percentage points) would yield roughly 3 additional wins from the current 68 submitted bids (+$4-5M revenue).

4. Time in stages

Average time in stage:
- Qualified: 12 days
- Proposal: 28 days
- Submitted (awaiting award): 45 days

Total cycle time: 85 days (qualified to award decision)

Benchmark: 60-90 days is typical for government contracting.

Bottleneck: Submitted stage (45 days awaiting award) is longest. This is largely out of your control (government evaluation timeline), but you can:

  • Engage contracting officer to understand timeline
  • Use this data to forecast when to expect award decisions (critical for cash flow planning)

Pricing Analysis Widget


Overview

The Pricing Analysis widget compares your submitted pricing to winning bids, helping you price more competitively. It answers: "Are we pricing too high, too low, or just right?"

Key metrics:

  • Average variance (your price vs. winning price)
  • Distribution (how often over vs. under)
  • Price optimization score (0-100)

Configuration:

  • Time range, include wins (yes/no), categories, value range, outlier filter

Interpretation:

  • +8% avg variance = pricing 8% over market (reduce to improve win rate)
  • -5% avg variance = pricing 5% under market (leaving money on table)
  • ±2% = optimal (competitive pricing)
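Average variance can be computed from (your price, winning price) pairs; the pairs below are hypothetical lost bids constructed to show an 8%-over-market result:

```python
def avg_price_variance(pairs):
    """Mean fractional variance of your submitted price vs. the winning price."""
    variances = [(ours - winning) / winning for ours, winning in pairs]
    return sum(variances) / len(variances)

# Hypothetical lost bids: (our submitted price, winning price)
pairs = [
    (1_080_000, 1_000_000),
    (2_700_000, 2_500_000),
    (540_000, 500_000),
]
v = avg_price_variance(pairs)
print(f"{v:+.0%} vs. market")  # +8% → priced about 8% over winning bids
```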

Use case: Pricing strategy refinement, post-bid pricing review, category-specific pricing analysis.


Submission Timeline Widget

Overview

The Submission Timeline widget shows upcoming proposal deadlines and team capacity. It answers: "What's due soon, and can we handle the workload?"

Key features:

  • Calendar view with deadlines
  • Team member assignments
  • Capacity indicators (overcommitted, at capacity, available)
  • Risk flags (overlapping deadlines, behind schedule)

Use case: Sprint planning, capacity management, deadline risk assessment.


Effort Analysis Widget

Overview

The Effort Analysis widget tracks time and resource investment across bids. It answers: "How many hours do we invest per proposal, and what's the ROI?"

Key metrics:

  • Avg hours per proposal (by category, size, complexity)
  • Cost per bid (hours × hourly rates)
  • Win rate correlated with effort (do more hours = higher win rate?)

Interpretation:

  • 120 hours avg for IT Services proposals
  • 200 hours avg for Construction proposals
  • Weak correlation (R²=0.18) between effort and win rate → Effort is not primary driver of wins

Use case: Bid budgeting, ROI analysis, process efficiency improvement.
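A correlation like the R²=0.18 above can be reproduced with a plain Pearson r on (hours, outcome) pairs; the data below is synthetic, chosen only to illustrate a weak relationship:

```python
def pearson_r2(xs, ys):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

# Synthetic data: proposal hours and outcome (1 = won, 0 = lost)
hours = [80, 100, 120, 150, 160, 200, 90, 140]
won = [0, 1, 0, 1, 0, 1, 0, 0]

r2 = pearson_r2(hours, won)
print(f"R² = {r2:.2f}")  # weak: hours alone don't predict wins
```

An R² this low says effort is necessary but not sufficient; spend analysis time on what differentiates wins (pricing, fit) rather than raw hours.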

Success

Integration tip: Connect time tracking tools (Toggl, Harvest, Clockify) to auto-populate effort data. Manual entry is error-prone and underreports actual time invested.


Summary & Next Steps

Bid Performance widgets provide a comprehensive view of your team's effectiveness:

  • Win Rate Analysis: Scorecard for overall performance and capability assessment
  • Bid Statistics: Quick pipeline health check and conversion metrics
  • Pipeline Overview: Visual funnel for bottleneck identification
  • Pricing Analysis: Competitive pricing optimization
  • Submission Timeline: Deadline management and capacity planning
  • Effort Analysis: ROI and efficiency tracking

Continuous improvement cycle:

  1. Weekly: Check Bid Statistics and Submission Timeline (operational tracking)
  2. Monthly: Review Win Rate Analysis and Pricing Analysis (tactical optimization)
  3. Quarterly: Deep dive into Pipeline Overview and Effort Analysis (strategic process improvement)

Explore other widget categories:
