AI Risk Predictions

Use machine learning to predict risks on new bids based on historical lessons

Updated 2026-03-30 · 23 min read

Leverage your lessons learned database to predict risks on new opportunities using AI-powered machine learning models.

Overview

The Risk Prediction system analyzes your historical lessons learned and bid outcomes to predict the likelihood of specific risks materializing on new opportunities. It uses machine learning to identify patterns that led to past failures and flags those patterns when they appear in new bids.

Predictive Power

Organizations with 30+ documented lessons typically achieve 65-75% prediction accuracy. With 100+ lessons, accuracy can exceed 80%, making risk predictions a valuable decision-support tool.

How Risk Predictions Work

Data Training

The AI system learns from your historical data:

Training inputs:

  • Lessons learned (wins, losses, issues encountered)
  • Bid characteristics (department, value, timeline, project type)
  • Outcomes (win/loss, scores, root causes of failures)
  • Action items (what went wrong, what was fixed)

What the model learns:

  • Which combinations of factors led to wins vs. losses
  • Common failure modes for specific bid types
  • Warning signs that historically preceded problems
  • Success patterns worth replicating

Prediction Process

Machine Learning Approach

Algorithms used:

Gradient Boosting (XGBoost):

  • Primary algorithm for risk prediction
  • Handles non-linear relationships between features
  • Robust to missing data
  • Provides feature importance scores

Random Forest:

  • Ensemble method for increased accuracy
  • Cross-validation to prevent overfitting
  • Good for classification (win/loss)

Neural Networks (for large datasets > 100 lessons):

  • Deep learning for complex pattern recognition
  • Embedding layers for text analysis
  • Attention mechanisms to weight important features

Ensemble Methods:

  • Combines predictions from multiple models
  • Weighted voting based on historical accuracy
  • Increases robustness and reduces false positives
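
To make the training pipeline concrete, here is a minimal sketch of how such a gradient boosting plus random forest ensemble could be assembled. The CSV source, column names, and hyperparameters are illustrative assumptions, not the product's actual schema or configuration:

```python
# Sketch: train two models on historical lesson/bid features and average
# their win/loss probabilities. Data source and feature names are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

lessons = pd.read_csv("lessons_learned.csv")  # hypothetical export, 1 row per bid
features = ["department_id", "contract_value", "timeline_months",
            "project_type_id", "concurrent_bids"]
X, y = lessons[features], lessons["won"]  # won: 1 = win, 0 = loss

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Primary model: gradient boosting handles non-linear feature interactions.
gbm = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
gbm.fit(X_train, y_train)

# Secondary model: random forest as an independent ensemble member.
rf = RandomForestClassifier(n_estimators=300, random_state=42)
rf.fit(X_train, y_train)

# Simple equal-weight ensemble of predicted win probabilities.
p_win = (gbm.predict_proba(X_test)[:, 1] + rf.predict_proba(X_test)[:, 1]) / 2

# Feature importance scores from the gradient boosting model.
for name, score in sorted(zip(features, gbm.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```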

Running Risk Predictions

From an Opportunity

When viewing an opportunity in the opportunities browser:

  1. Click Analyze Risk on the opportunity detail page
  2. System extracts opportunity characteristics automatically
  3. Predictions display within 5-10 seconds

From a Bid Analysis

When analyzing a new RFP:

  1. Complete bid analysis (requirement extraction and capability matching)
  2. Click Predict Risks in the Actions menu
  3. System combines RFP requirements with opportunity metadata
  4. Enhanced predictions based on specific requirements

Manual Risk Assessment

For opportunities not in the system:

  1. Navigate to Lessons Learned → Risk Predictions
  2. Click New Risk Assessment
  3. Manually enter bid characteristics:
    • Department/client
    • Contract value
    • Project type
    • Timeline
    • Key requirements (optional but improves accuracy)
  4. Click Run Prediction

Bulk Predictions

Run predictions on multiple opportunities at once:

  1. Go to Opportunities → Select multiple opportunities
  2. Click Bulk Actions → Predict Risks
  3. System processes each opportunity and generates a comparison report

Use case: Quarterly pipeline review—run predictions on all active opportunities to prioritize which to pursue.

Understanding Prediction Results

Risk Prediction Dashboard

When predictions complete, you see:

Overall Risk Score:

  • 0-100 scale (higher = more risky)
  • Color-coded: Green (0-30), Yellow (31-60), Red (61-100)
  • Confidence interval (e.g., "65 ± 12" means likely between 53-77)
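
A minimal sketch of the banding and interval arithmetic described above (the function name is illustrative):

```python
# Sketch: map a 0-100 risk score to the colour bands used in the dashboard.
def risk_band(score: float) -> str:
    if score <= 30:
        return "Green (low risk)"
    if score <= 60:
        return "Yellow (medium risk)"
    return "Red (high risk)"

print(risk_band(65))                                        # Red (high risk)
print(f"65 ± 12 → likely between {65 - 12} and {65 + 12}")  # 53 and 77
```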

Risk Category Breakdown:

| Risk Category | Probability | Confidence | Contributing Factors |
|---|---|---|---|
| Win Probability | 35% | High (±5%) | Similar past bids: 3 losses, 2 wins |
| Timeline Overrun | 68% | Medium (±15%) | Complex integration, aggressive timeline |
| Pricing Risk | 52% | High (±8%) | Competitive market, pricing sensitivity |
| Technical Feasibility | 23% | High (±10%) | Similar projects succeeded |
| Resource Availability | 71% | Low (±20%) | Peak period, limited SME pool |

Similar Past Bids:

  • Top 5 most similar opportunities from your history
  • Outcomes (win/loss) and key lessons
  • Clickable links to full lesson details

Mitigation Recommendations:

  • AI-generated suggestions based on lessons from similar bids
  • Specific action items to reduce identified risks
  • Prioritized by impact and feasibility

Risk Score Interpretation

0-30 (Low Risk - Green):

  • Opportunity aligns well with your strengths
  • Historical data suggests high success probability
  • Similar past bids were mostly wins
  • Action: Proceed with confidence, apply proven strategies

31-60 (Medium Risk - Yellow):

  • Mixed signals from historical data
  • Some concerning factors but manageable
  • Similar past bids had varied outcomes
  • Action: Proceed with caution, implement recommended mitigations

61-100 (High Risk - Red):

  • Multiple warning signs based on historical failures
  • Similar past bids were mostly losses
  • May have fundamental capability gaps
  • Action: Seriously consider no-bid, or make major strategic adjustments

Risk Scores Are Probabilities, Not Certainties

A 70% risk score means 70% probability of issues based on historical patterns. It doesn't guarantee problems—context and mitigations matter. Use predictions as decision support, not absolute truth.

Confidence Intervals

Every prediction includes a confidence interval showing uncertainty:

High Confidence (±5-10%):

  • Strong historical data for similar bids
  • Clear patterns in the data
  • Prediction is reliable

Medium Confidence (±11-20%):

  • Some historical data but limited
  • Patterns are somewhat consistent
  • Prediction is directional but not precise

Low Confidence (±21%+):

  • Very limited historical data
  • High variance in past outcomes
  • Prediction should be taken with skepticism

Example: "Win Probability: 45% ± 8% (High Confidence)"

  • Likely between 37-53% win probability
  • Prediction is fairly reliable based on historical data

"Timeline Overrun Risk: 60% ± 25% (Low Confidence)"

  • Could be anywhere from 35-85%
  • Not enough similar historical data for precise prediction
  • Use cautiously
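
One common way such an interval could be produced is bootstrap resampling: refit a small model on many resamples of the historical lessons and report the spread of its predictions. The sketch below illustrates that idea; it is an assumption about the approach, not a statement of how the product computes its intervals:

```python
# Sketch: estimate a "± X%" band for a new bid via bootstrap resampling.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.utils import resample

def win_probability_interval(X_train, y_train, x_new, n_boot=50):
    """x_new is a one-row DataFrame with the new bid's features."""
    probs = []
    for _ in range(n_boot):
        Xb, yb = resample(X_train, y_train)             # bootstrap resample
        model = GradientBoostingClassifier().fit(Xb, yb)
        probs.append(model.predict_proba(x_new)[0, 1])  # P(win) for the new bid
    probs = np.array(probs)
    return probs.mean(), 1.96 * probs.std()             # mean, ~95% half-width

# Example output shape: (0.45, 0.08) → "Win Probability: 45% ± 8%"
```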

Contributing Factors

For each risk, the system explains what factors drove the prediction:

Example:

Timeline Overrun Risk: 68% (Medium Confidence)

Contributing Factors:

  1. Similar integration projects (weight: 35%)

    • 5 of 7 past integration projects exceeded timeline
    • Average overrun: 3.2 months
    • Lesson #23: "Healthcare Integration Delayed by Vendor Dependencies"
  2. Aggressive timeline (weight: 25%)

    • Proposed timeline: 8 months
    • Historical average for similar scope: 11.5 months
    • Lesson #47: "Unrealistic Timeline Led to Quality Issues"
  3. Resource conflicts (weight: 20%)

    • Bid period overlaps with 2 other major proposals
    • Historical data shows an 80% failure rate with 3+ concurrent bids
    • Lesson #61: "Concurrent Bids Stretched Team Too Thin"
  4. Legacy system integration (weight: 15%)

    • Mainframe integration required
    • 3 of 4 mainframe projects experienced delays
    • Lesson #34: "Underestimated Mainframe Complexity"
  5. Client history (weight: 5%)

    • This client has extended timelines on 2 of 3 past projects
    • Minor contributing factor

Interpretation: Multiple strong historical signals suggest high timeline risk. Address by: (1) adding 20% buffer to timeline, (2) limiting concurrent bids, (3) including discovery phase.
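
Conceptually, the overall score can be read as a weighted combination of per-factor signals. The sketch below reproduces the example's weights; the per-factor signal values are illustrative approximations chosen for the example, not model outputs:

```python
# Sketch: combine per-factor risk signals (0-1) into an overall score using
# the weights from the example above. Signal values are illustrative.
factors = {
    # factor: (weight, factor-level risk signal)
    "similar_integration_projects": (0.35, 5 / 7),   # 5 of 7 overran
    "aggressive_timeline":          (0.25, 0.65),
    "resource_conflicts":           (0.20, 0.60),
    "legacy_system_integration":    (0.15, 3 / 4),   # 3 of 4 delayed
    "client_history":               (0.05, 2 / 3),   # 2 of 3 extended
}

overall = sum(weight * signal for weight, signal in factors.values())
print(f"Timeline Overrun Risk: {overall:.0%}")        # ≈ 68%

for name, (weight, signal) in sorted(factors.items(), key=lambda kv: -kv[1][0]):
    print(f"  {name}: weight {weight:.0%}, signal {signal:.0%}")
```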

Risk Categories Explained

Win Probability

What it predicts: Likelihood you'll win this bid based on historical success rate with similar opportunities.

Factors considered:

  • Department/client (your win rate with this client)
  • Contract value (win rate in this price range)
  • Project type (your success with similar projects)
  • Competition level (inferred from past bids)
  • Proposal quality factors (if bid analysis is linked)

Example: "Win Probability: 42% ± 7%"

  • Similar past bids: 8 wins, 11 losses (42% win rate)
  • This aligns with your historical performance on similar opportunities

How to improve: Review lessons from wins in this category—what worked? Apply those strategies.

Timeline Overrun Risk

What it predicts: Likelihood the project will take longer than proposed timeline.

Factors considered:

  • Project complexity vs. timeline proposed
  • Historical timeline performance on similar projects
  • Resource availability during bid period
  • Client history (do they tend to extend timelines?)
  • Integration complexity (legacy systems increase risk)

Example: "Timeline Overrun Risk: 65% ± 12%"

  • 7 of 10 similar projects exceeded timeline by average of 2.8 months
  • Your proposed timeline is aggressive compared to historical data

How to mitigate:

  • Add buffer to timeline (recommendation: +20-30%)
  • Include discovery phase to reduce unknowns
  • Build in contingency for delays
  • Propose phased delivery with interim milestones

Pricing Risk

What it predicts: Likelihood your pricing will be uncompetitive or problematic.

Factors considered:

  • Historical pricing vs. win rate
  • Competitor pricing patterns (if known)
  • Client budget sensitivity
  • Market conditions
  • Scope creep risk (complex projects often exceed budget)

Example: "Pricing Risk: 58% ± 10%"

  • Similar past bids where you priced over $2M had 25% win rate
  • This client is historically price-sensitive (chose lowest bidder in 4 of 5 past awards)

How to mitigate:

  • Sharpen your pencil—look for cost efficiencies
  • Emphasize value, not just cost
  • Consider risk-based pricing (contingency for unknowns)
  • Validate pricing against market intelligence

Technical Feasibility Risk

What it predicts: Likelihood you'll encounter technical challenges during delivery.

Factors considered:

  • Your technical capability vs. requirements
  • Historical performance on similar technical challenges
  • Technology maturity (bleeding-edge tech is riskier)
  • Integration complexity
  • Team expertise availability

Example: "Technical Feasibility Risk: 35% ± 8%"

  • You've successfully delivered 6 of 7 similar technical projects
  • Team has strong expertise in required technologies
  • Low risk—proceed with confidence

How to mitigate:

  • Assign experienced SMEs to bid and delivery
  • Include proof-of-concept or pilot phase
  • Partner with specialists if capability gaps exist
  • Propose phased approach to reduce technical risk

Resource Availability Risk

What it predicts: Likelihood you won't have adequate resources (people, time, SMEs) to deliver.

Factors considered:

  • Current team utilization
  • Concurrent bids and projects
  • Availability of required SMEs
  • Hiring/ramping timeline vs. project start date
  • Historical performance when overcommitted

Example: "Resource Availability Risk: 72% ± 18%"

  • 3 concurrent bids during the same period
  • Historical data: 80% failure rate when handling 3+ concurrent bids
  • Key SMEs already allocated to other projects

How to mitigate:

  • Limit concurrent bids (no-bid on lower-priority opportunities)
  • Secure SME availability commitments before bidding
  • Partner to access additional resources
  • Propose later start date if possible
  • Consider staff augmentation or subcontracting

Requirement Complexity Risk

What it predicts: Likelihood the requirements will be more complex than they appear, leading to scope creep or delivery challenges.

Factors considered:

  • Number and type of requirements
  • Ambiguity in RFP language
  • Historical complexity surprises on similar bids
  • Client's past RFPs (did they clarify during Q&A?)

Example: "Requirement Complexity Risk: 61% ± 15%"

  • RFP contains vague language in 12 of 45 requirements
  • Similar past RFPs from this client had 30% scope creep
  • Lesson #39: "Ambiguous Requirements Led to Costly Changes"

How to mitigate:

  • Ask clarifying questions in Q&A
  • Include discovery phase to refine requirements
  • Propose fixed-price for defined scope only, T&M for ambiguous areas
  • Build contingency into timeline and budget

Client Relationship Risk

What it predicts: Likelihood of challenges due to client behavior, expectations, or history.

Factors considered:

  • Your past performance with this client
  • Client's reputation (slow decisions, scope changes, payment issues)
  • Relationship strength (new client vs. long-term partner)
  • Client turnover (new procurement team unfamiliar with you)

Example: "Client Relationship Risk: 48% ± 12%"

  • First bid with this client (unknown relationship)
  • Historical data: 40% win rate with new clients vs. 60% with repeat clients
  • No major red flags but lack of relationship is a risk

How to mitigate:

  • Invest in relationship-building (site visits, meetings, Q&A engagement)
  • Reference similar clients to build credibility
  • Propose regular check-ins and communication protocols
  • Highlight your onboarding and client success processes

Mitigation Recommendations

For each identified risk, the AI suggests specific mitigations based on lessons learned:

Recommendation Format

Risk: Timeline Overrun (68% probability)

Recommended Mitigations:

  1. Add 20-30% timeline buffer

    • From Lesson #47: "Unrealistic Timeline Led to Quality Issues"
    • Rationale: Historical overrun on similar projects averaged 25%
    • Implementation: Increase timeline from 8 months to 10 months
    • Impact: Reduces timeline risk to ~40% (estimated)
  2. Include discovery phase (2 months)

    • From Lesson #34: "Discovery Phase Reduced Integration Surprises"
    • Rationale: Reduces unknowns and refines estimates
    • Implementation: Propose Phase 0 for requirements validation and architecture design
    • Impact: Reduces timeline and technical feasibility risk
  3. Limit concurrent bids to 2 maximum

    • From Lesson #61: "Concurrent Bids Stretched Team Too Thin"
    • Rationale: Historical 80% failure rate with 3+ concurrent bids
    • Implementation: No-bid on lower-priority opportunities during this period
    • Impact: Improves resource availability and proposal quality
  4. Use 3x complexity multiplier for mainframe integration

    • From Lesson #51: "Underestimated Mainframe Integration Effort"
    • Rationale: Mainframe projects took 3x longer than modern system integrations
    • Implementation: Update estimate with realistic multiplier
    • Impact: Improves pricing and timeline realism, increases credibility

Priority: High (address mitigations 1, 2, 3 before bidding)

Mitigation Effectiveness

The system estimates how each mitigation would change the risk score:

Before Mitigations:

  • Timeline Overrun Risk: 68%
  • Win Probability: 35%

After Applying Recommended Mitigations:

  • Timeline Overrun Risk: 42% (-26 percentage points)
  • Win Probability: 52% (+17 percentage points)

Net Effect: Mitigations significantly reduce risk and improve win probability. Investment in mitigations is worthwhile.
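
In effect, the system re-scores the bid with mitigated feature values. Here is a minimal, self-contained sketch of that idea; the tiny synthetic training set and feature names are illustrative assumptions so the example runs standalone:

```python
# Sketch: re-score a bid with mitigated feature values to estimate how a
# mitigation shifts the predicted risk.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

features = ["timeline_months", "concurrent_bids", "has_discovery_phase"]

# Synthetic history: last column = 1 if the timeline overran.
history = pd.DataFrame(
    [[8, 3, 0, 1], [9, 3, 0, 1], [12, 1, 1, 0], [11, 2, 1, 0],
     [8, 2, 0, 1], [13, 1, 1, 0], [10, 2, 0, 0], [9, 3, 0, 1]],
    columns=features + ["overran"],
)
model = GradientBoostingClassifier().fit(history[features], history["overran"])

baseline = pd.DataFrame([[8, 3, 0]], columns=features)
mitigated = pd.DataFrame([[10, 2, 1]], columns=features)  # buffer, fewer bids, discovery

risk_before = model.predict_proba(baseline)[0, 1]
risk_after = model.predict_proba(mitigated)[0, 1]
print(f"Timeline overrun risk: {risk_before:.0%} → {risk_after:.0%}")
```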

Automated Action Item Creation

Convert mitigation recommendations into action items:

  1. Review recommended mitigations
  2. Select mitigations to implement
  3. Click Create Action Items
  4. System generates action items with owners and deadlines
  5. Track implementation progress

Example action items from above:

  • Update proposal timeline to 10 months (Owner: Proposal Manager, Due: Before RFP submission)
  • No-bid on Opportunity #3452 to free resources (Owner: Bid Manager, Due: This week)
  • Add discovery phase to scope and pricing (Owner: Solutions Architect, Due: Before RFP submission)
  • Apply 3x multiplier to mainframe integration estimate (Owner: Estimating Lead, Due: Before pricing)

Prediction Accuracy & Improvement

Tracking Accuracy

After each bid, the system compares predictions to actual outcomes:

Access: Lessons Learned → Risk Predictions → Accuracy Report

Metrics tracked:

Win/Loss Prediction Accuracy:

  • How often did the model correctly predict win vs. loss?
  • Current accuracy: 68% (baseline random would be 50%)

Risk Materialization Rate:

  • For predicted high-risk items, how often did the risk actually occur?
  • Example: Timeline overrun predicted in 10 bids, occurred in 7 (70% accuracy)

False Positives:

  • Risks predicted but didn't materialize (over-prediction)

False Negatives:

  • Risks that occurred but weren't predicted (under-prediction)

Calibration:

  • Are 70% probability predictions actually correct 70% of the time?
  • Well-calibrated models match predicted probability to actual frequency
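
These metrics are straightforward to compute once outcomes are recorded. The sketch below uses standard scikit-learn functions; the arrays are illustrative placeholders for the accuracy report's stored predictions and actual outcomes:

```python
# Sketch: compare stored predictions with actual outcomes after bids close.
import numpy as np
from sklearn.metrics import accuracy_score, brier_score_loss, confusion_matrix

y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])    # actual: 1 = win
y_prob = np.array([0.7, 0.4, 0.2, 0.6, 0.3, 0.8, 0.55, 0.1, 0.65, 0.45])
y_pred = (y_prob >= 0.5).astype(int)                  # predicted win/loss

print("Win/loss accuracy:", accuracy_score(y_true, y_pred))

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"False positives (predicted win, actually lost): {fp}")
print(f"False negatives (predicted loss, actually won): {fn}")

# Brier score: lower means better-calibrated probability estimates.
print("Brier score:", brier_score_loss(y_true, y_prob))
```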

Improving Predictions

The model improves automatically:

After each bid outcome:

  1. System records actual outcome (win/loss, issues encountered)
  2. Compares to prediction
  3. Updates model weights based on prediction error
  4. Retrains model with new data

After each new lesson:

  1. New lesson is incorporated into training data
  2. Model learns new patterns and failure modes
  3. Next predictions reflect updated knowledge

Manual feedback: Users can flag inaccurate predictions:

  • "This risk didn't materialize despite high prediction"
  • "This risk occurred but wasn't predicted"
  • System adjusts feature weights accordingly
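
The retraining step itself is simple in principle: append the closed bid (features plus actual outcome) to the training set and refit. A minimal sketch, assuming illustrative column names and a pandas DataFrame of historical bids:

```python
# Sketch: fold a newly recorded bid outcome back into the training data and refit.
import pandas as pd
from xgboost import XGBClassifier

features = ["department_id", "contract_value", "timeline_months"]

def retrain_with_outcome(history: pd.DataFrame, closed_bid: dict) -> XGBClassifier:
    """Append the closed bid (features + actual outcome) and refit from scratch."""
    history = pd.concat([history, pd.DataFrame([closed_bid])], ignore_index=True)
    model = XGBClassifier(n_estimators=200, eval_metric="logloss")
    model.fit(history[features], history["won"])
    return model

# Example usage: a bid just closed as a loss.
# model = retrain_with_outcome(history, {"department_id": 3,
#                                        "contract_value": 2_400_000,
#                                        "timeline_months": 8, "won": 0})
```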

Accuracy by Data Volume

Typical accuracy progression:

| Lessons in Database | Win/Loss Accuracy | Risk Prediction Accuracy | Confidence |
|---|---|---|---|
| 10-20 | 55-60% | Low | Low |
| 20-30 | 60-65% | Moderate | Medium |
| 30-50 | 65-72% | Good | Medium-High |
| 50-100 | 72-78% | Very Good | High |
| 100+ | 78-85% | Excellent | Very High |

Key takeaway: Predictions become reliable around 30 lessons, highly accurate around 50-100 lessons. If you have < 20 lessons, use predictions directionally but don't over-rely on them.

Model Performance Dashboard

Access: Lessons Learned → Risk Predictions → Model Performance

Visualizations:

Accuracy Over Time: Chart showing prediction accuracy improving as more lessons are added.

Confusion Matrix: 2x2 grid showing:

  • True Positives: Predicted win, actually won
  • False Positives: Predicted win, actually lost
  • True Negatives: Predicted loss, actually lost
  • False Negatives: Predicted loss, actually won

Risk Calibration Plot: Scatter plot of predicted risk probability vs. actual occurrence rate

  • Well-calibrated models align along diagonal
  • Shows if model over-predicts or under-predicts risk

Feature Importance: Which factors matter most in predictions?

  • Example: "Department" has 25% importance, "Contract Value" has 18%, etc.
  • Helps understand what drives outcomes
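
The calibration plot on this dashboard is the standard reliability curve. Here is a minimal sketch of how such a plot can be built with scikit-learn and matplotlib; the outcome and probability arrays are illustrative placeholders:

```python
# Sketch: reliability (calibration) curve from stored predictions and outcomes.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.calibration import calibration_curve

y_true = np.random.randint(0, 2, 200)                              # placeholder outcomes
y_prob = np.clip(y_true * 0.3 + np.random.rand(200) * 0.7, 0, 1)   # placeholder predictions

prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=10)

plt.plot(prob_pred, prob_true, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfectly calibrated")
plt.xlabel("Predicted risk probability")
plt.ylabel("Actual occurrence rate")
plt.legend()
plt.show()
```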

Advanced Features

Custom Risk Models

Build specialized prediction models for specific scenarios:

Example custom models:

  • High-Value Bids (>$5M) - Trained only on large bids
  • Department-Specific (e.g., ISED-only) - Specializes in one client
  • Technical Projects - Focuses on technical feasibility
  • Fast-Track Bids - Short timeline opportunities

How to create:

  1. Settings → Lessons Learned → Risk Predictions → Custom Models
  2. Click Create Custom Model
  3. Name the model (e.g., "ISED High-Value Bids")
  4. Define filter criteria (department, value range, category)
  5. System trains model on filtered subset of lessons
  6. Use custom model for relevant opportunities

When to use: If you have 50+ lessons and want specialized models for distinct bid types.
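
Under the hood, a custom model is essentially a model trained on a filtered slice of your lessons. A minimal sketch, assuming illustrative column names and filter values:

```python
# Sketch: train a specialised model on a filtered subset of lessons,
# e.g. high-value bids for one department.
import pandas as pd
from xgboost import XGBClassifier

lessons = pd.read_csv("lessons_learned.csv")   # hypothetical export

# Filter criteria mirroring an "ISED High-Value Bids" custom model.
subset = lessons[(lessons["department"] == "ISED") &
                 (lessons["contract_value"] > 5_000_000)]

features = ["contract_value", "timeline_months", "project_type_id"]
custom_model = XGBClassifier(n_estimators=150, eval_metric="logloss")
custom_model.fit(subset[features], subset["won"])
```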

Ensemble Predictions

Combine multiple models for increased accuracy:

Ensemble approach:

  • Gradient Boosting model (primary)
  • Random Forest model (secondary)
  • Neural Network model (if 100+ lessons)
  • Custom models (if applicable)

Weighted voting: Each model gets a vote, weighted by its historical accuracy on similar bids.

Result: More robust predictions, less vulnerable to model quirks or overfitting.

Enable: Settings → Risk Predictions → Use Ensemble Models (default: ON)
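
Weighted voting itself reduces to an accuracy-weighted average of each model's predicted probability. A small sketch with illustrative weights and probabilities:

```python
# Sketch: combine per-model probabilities, weighting each by its historical accuracy.
import numpy as np

model_probs = {"xgboost": 0.62, "random_forest": 0.55, "neural_net": 0.70}
model_accuracy = {"xgboost": 0.78, "random_forest": 0.72, "neural_net": 0.68}

probs = np.array([model_probs[m] for m in model_probs])
weights = np.array([model_accuracy[m] for m in model_probs])

ensemble_prob = float(np.average(probs, weights=weights))
print(f"Ensemble win probability: {ensemble_prob:.0%}")
```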

Sensitivity Analysis

Test how prediction changes if assumptions change:

Example: "What if we add a discovery phase?"

  • Adjust timeline from 8 months to 10 months (2-month discovery)
  • Rerun prediction
  • See how risk scores change

Use case: Evaluate different strategic options before committing to a bid approach.

Access: On the prediction results page, click What-If Analysis → Adjust parameters → Recalculate
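
Behind the scenes, what-if analysis amounts to re-running the prediction with one assumption changed at a time. The sketch below sweeps the proposed timeline; `model` and the feature names follow the earlier sketches and are illustrative assumptions:

```python
# Sketch: sweep one assumption (proposed timeline) and re-score each scenario.
import pandas as pd

def timeline_sensitivity(model, base_features: dict, months_options):
    """Return predicted timeline-overrun risk for each candidate timeline."""
    rows = []
    for months in months_options:
        scenario = {**base_features, "timeline_months": months}
        risk = model.predict_proba(pd.DataFrame([scenario]))[0, 1]
        rows.append({"timeline_months": months, "overrun_risk": round(risk, 2)})
    return pd.DataFrame(rows)

# Example: compare the proposed 8-month plan against longer options.
# print(timeline_sensitivity(model,
#                            {"concurrent_bids": 2, "has_discovery_phase": 1},
#                            months_options=[8, 9, 10, 11]))
```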

Comparative Predictions

Compare risk across multiple opportunities:

Access: Opportunities → Select multiple → Compare Risks

Output: Side-by-side risk comparison table:

| Opportunity | Win Probability | Timeline Risk | Pricing Risk | Overall Risk | Recommendation |
|---|---|---|---|---|---|
| ISED Cloud Migration | 52% | 42% | 48% | Medium | Proceed with mitigations |
| DND Cybersecurity | 38% | 68% | 55% | High | Consider no-bid |
| PSPC Integration | 61% | 35% | 40% | Low | Strong opportunity |
| Health Data Analytics | 45% | 51% | 62% | Medium | Proceed if pricing improved |

Use case: Quarterly pipeline review—prioritize opportunities with best risk profile.

Integrating Predictions into Workflow

During Opportunity Evaluation

Go/No-Go Decision:

  1. Review opportunity details
  2. Run risk prediction
  3. Evaluate risk scores and confidence
  4. Review similar past bids and lessons
  5. Assess mitigation feasibility
  6. Make go/no-go decision informed by data

Decision framework:

| Risk Score | Confidence | Decision |
|---|---|---|
| 0-30 (Low) | Any | Strong go—allocate resources |
| 31-60 (Medium) | High | Go with mitigations—address predicted risks |
| 31-60 (Medium) | Low | Uncertain—gather more info, revisit decision |
| 61-100 (High) | High | Likely no-bid unless strategic imperative |
| 61-100 (High) | Low | Uncertain—prediction unreliable, use judgment |

During Bid Preparation

Risk-Informed Strategy:

  1. Review risk predictions at bid kickoff
  2. Assign mitigation actions to specific team members
  3. Build mitigations into proposal (e.g., discovery phase, phased delivery)
  4. Reference lessons from similar past bids
  5. Validate assumptions that drove predictions

Proposal Sections:

  • Risk Management: Address predicted risks explicitly in proposal
  • Methodology: Incorporate mitigations (phased delivery, discovery, etc.)
  • Past Performance: Highlight successes from similar bids to build credibility

After Bid Submission

Prediction Validation:

  1. Record actual outcome (win/loss)
  2. Document which predicted risks materialized
  3. Note if any unexpected issues occurred (false negatives)
  4. System automatically updates model with feedback

Lesson Creation: Use prediction results as a starting point for lessons:

  • "Predicted timeline risk materialized—add this to lessons learned"
  • "Predicted low risk but encountered issues—understand why model missed it"

Limitations and Caveats

Sample Size Dependency

Issue: Small datasets lead to unreliable predictions and overfitting.

Mitigation:

  • The system displays confidence intervals—use them to calibrate trust
  • For < 30 lessons, use predictions directionally only
  • Focus on accumulating quality lessons before relying heavily on predictions

Data Quality Dependency

Issue: "Garbage in, garbage out"—inaccurate or incomplete lessons lead to bad predictions.

Mitigation:

  • Implement review gates to ensure lesson quality
  • Periodically audit lessons for accuracy
  • Update lessons when new information emerges

Context Blindness

Issue: The model doesn't know about external factors not captured in lessons (market shifts, organizational changes, new regulations).

Mitigation:

  • Use predictions as input to human judgment, not replacement
  • Add context in manual risk assessment when running predictions
  • Update lessons to capture new environmental factors

Correlation vs. Causation

Issue: Model identifies correlations (phased delivery correlated with wins) but can't prove causation.

Mitigation:

  • Treat predictions as hypotheses to test, not facts
  • Consider alternative explanations (e.g., phased delivery used selectively on the right opportunities)
  • Use A/B testing when possible to validate causal relationships

Black Box Risk

Issue: Complex models (especially neural networks) can be hard to interpret.

Mitigation:

  • Use interpretable models (gradient boosting) as primary
  • Provide "Contributing Factors" explanations for every prediction
  • Allow users to question and provide feedback on predictions

Overfitting

Issue: Model learns noise in the training data rather than generalizable patterns.

Mitigation:

  • Cross-validation during training
  • Regularization techniques
  • Ensemble methods to average out overfitting
  • Monitor performance on new data vs. training data
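
Cross-validation is the standard way to check that accuracy holds up on data the model has not seen. A minimal sketch, assuming an illustrative data source and column names:

```python
# Sketch: k-fold cross-validation to estimate out-of-sample accuracy.
import pandas as pd
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

lessons = pd.read_csv("lessons_learned.csv")   # hypothetical export
features = ["department_id", "contract_value", "timeline_months", "project_type_id"]

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
scores = cross_val_score(model, lessons[features], lessons["won"],
                         cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.0%} ± {scores.std():.0%}")
```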

Best Practices

Trust but Verify

Use predictions as one input among many:

  • Expert judgment
  • Market intelligence
  • Client relationships
  • Strategic priorities
  • Resource availability

Don't blindly follow predictions—understand the reasoning.

Invest in Lesson Quality

Better data → Better predictions:

  • Capture lessons promptly and accurately
  • Include detailed context and root cause analysis
  • Link lessons to actual bid data (not just anecdotes)
  • Update lessons when new information emerges

Start Simple

Phase 1: Accumulate lessons (no predictions)

  • Build database of 30-50 quality lessons
  • Establish lesson creation habits

Phase 2: Basic predictions (win/loss only)

  • Start using predictions for go/no-go decisions
  • Build trust in the system

Phase 3: Advanced predictions (multi-risk models)

  • Expand to detailed risk predictions
  • Use for strategy development and mitigation planning

Provide Feedback

Close the loop:

  • Record actual outcomes after each bid
  • Note which predictions were accurate vs. inaccurate
  • Create lessons that incorporate prediction insights

Benefits:

  • Model learns and improves
  • Team builds trust in the system
  • Predictions become more accurate over time

Communicate Uncertainty

When sharing predictions with stakeholders:

  • Always include confidence intervals
  • Explain what the prediction is based on (similar past bids)
  • Clarify that predictions are probabilities, not certainties
  • Provide context and recommendations, not just numbers

Bad: "The model says we'll lose this bid."

Good: "Based on 8 similar past bids (3 wins, 5 losses), the model predicts 38% win probability with high confidence. However, if we implement the recommended mitigations (phased delivery approach, discovery phase), win probability could improve to 52%."

Next Steps

Now that you understand risk predictions, put them into practice:

Success

Run your first risk prediction on an upcoming opportunity. Review the similar past bids and recommended mitigations—even if you don't act on them, the insights build intuition.
