Behavioral Biases in Forecasting and Scenario Analysis (CFA Level 1): Key Behavioral Biases, Overconfidence Bias, and Anchoring Bias. Key definitions, formulas, and exam tips.
So, you’ve just wrapped up your initial forecasts for a brand-new financial model—complete with projected sales, expenses, and multiple lines of assumptions. Feels good, right? Maybe you have that slight tingle of excitement. I’ve been there, sitting in a conference room, turning a swirl of sales projections, marketing estimates, and cost data into a neat, color-coded spreadsheet. But here’s the thing: the moment we get comfortable is usually when behavioral biases sneak in. It’s almost comical how easily overconfidence or anchoring can sabotage our well-intentioned financial projections.
In this section, we’ll discuss some of the most common behavioral biases that can derail even the most capable analyst. We’ll also explore practical ways to mitigate them, with scenario analysis as one of the most powerful tools in the financial modeler’s arsenal. This discussion will tie closely to other chapters—particularly when we want to stress test (Chapter 14) or do sensitivity analysis (Chapter 16.5)—because an accurate forecast often means weaving together both human judgment and systematic checks.
Behavioral biases are those pesky tendencies we all have that color our judgments. They might prompt us to overvalue information we like, or cling too tightly to an initial guess. Let’s look at how each of these biases might show up in forecasting and scenario analysis.
Overconfidence bias refers to overestimating the accuracy of your data or your ability to predict outcomes. It’s like saying, “My forecast is definitely right,” even in the face of uncertain market conditions. Overconfident analysts might, for example, set forecast ranges that are too narrow, dismiss downside risks, or skip stress testing altogether.
I remember once forecasting monthly revenue growth for a tech startup that had just launched a new subscription service. I was so convinced by the team’s hype and early traction that I neglected the possibility of a big slowdown post-launch. Sure enough, that slowdown happened. Overconfidence can happen to all of us—especially if we’re proud of our data or the line of logic we used to arrive at the forecast.
Anchoring bias happens when we fixate too heavily on initial information—like the first data point or a prior period’s results—and fail to adequately adjust for new insights. If you see an initial company valuation at $500 million, you might inadvertently cling to that number, even if subsequent analysis suggests the business is worth closer to $400 million.
In financial modeling, anchoring can happen if we base revenue assumptions on last year’s 10% growth, then apply a small 1–2% tweak without truly challenging whether that historical growth is relevant for the future. Or we anchor on last quarter’s operating margin, ignoring major changes in input costs. The residual effect of the anchor can be so strong that we fail to incorporate the broader economic environment or changes in the competitive landscape.
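To make the anchoring effect concrete, here is a minimal sketch contrasting an anchored forecast (last year’s growth with a token tweak) against one rebuilt from underlying drivers. All of the numbers are hypothetical, chosen only to illustrate how little the anchored estimate moves:

```python
# Hypothetical illustration of anchoring: tweaking last year's 10% growth
# versus re-deriving the estimate from volume and price drivers.
last_year_growth = 0.10

# Anchored forecast: shave a point off the prior number and call it done
anchored_forecast = last_year_growth - 0.01  # 9%

# Driver-based forecast: rebuild growth from assumptions about the business
volume_growth = 0.02  # market maturing (assumed)
price_growth = 0.01   # pricing pressure from a cheaper competitor (assumed)
rederived_forecast = (1 + volume_growth) * (1 + price_growth) - 1  # ~3%

print(f"Anchored: {anchored_forecast:.1%}, Driver-based: {rederived_forecast:.1%}")
```

The gap between the two numbers is the point: the anchor quietly carries forward conditions that may no longer hold.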
Confirmation bias is the classic “I only want to see what makes me right” phenomenon. We tend to select evidence that supports our hypothesis—say, we believe a new product will be a hit—while minimizing or ignoring signs of trouble. If you’ve built a scenario stating the product will grow 50% annually, you might ignore contrary signs like surveys showing most consumers prefer a competitor’s product or that churn metrics are creeping upward.
In forecasting, confirmation bias manifests as selective acceptance of positive data—such as strong pilot-program results—and downplaying contradictory data, like high customer acquisition costs. This bias is dangerous because it can create a one-sided gathering of evidence, rendering the entire forecast’s foundation shaky.
Anyone who has taken a broad look at corporate bankruptcies, missed earnings targets, or big M&A flops will see examples of these biases in action. Overconfidence might cause a firm to overpay for an acquisition. Anchoring might keep an analyst from revising earnings expectations, even after negative news. Confirmation bias might lead a CFO to adopt only the best-case scenario, ignoring signals of an economic downturn.
From an exam perspective—especially in the context of the CFA curriculum—recognizing these biases and explaining how to mitigate them can be tested in both item-set and short-answer questions. The real-world lesson is straightforward: If you can’t spot these biases in your own analysis, your carefully built model can be unreliable at best, and dangerously misleading at worst.
So how can we combat these biases without turning our day-to-day work into a never-ending self-doubt spiral? A few practical safeguards: document your assumptions explicitly, invite independent review of your model, track your past forecast errors, and—above all—build multiple scenarios rather than a single point estimate.
Scenario analysis is a powerful technique to guard against biases while also providing richer insight into how your financial results might vary under different conditions. Rather than relying on a single point estimate, you create multiple “what if” versions of your model. Typically, you’ll see at least three: a base case, an optimistic (best) case, and a pessimistic (worst) case.
If you want to get fancy, you might assign probabilities to each scenario. For instance, if you believe the chance of the pessimistic scenario is 30%, you can estimate an expected value for certain performance metrics. There’s no strict rule on how many scenarios you should build. Some analysts do five or more (e.g., best-case, near-best, base, near-worst, worst-case). The key is to vary your assumptions enough to reveal a range of plausible outcomes.
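As a quick sketch of the probability-weighting idea: the 30% pessimistic weight comes from the text, while the remaining weights and the net income figures are assumptions chosen purely for illustration.

```python
# Probability-weighted expected net income across three scenarios.
# The 30% pessimistic weight is from the text; other values are assumed.
scenarios = {
    "pessimistic": {"prob": 0.30, "net_income": 25_250_000},
    "base":        {"prob": 0.50, "net_income": 27_500_000},
    "optimistic":  {"prob": 0.20, "net_income": 28_750_000},
}

expected_net_income = sum(s["prob"] * s["net_income"] for s in scenarios.values())
print(f"Expected net income: ${expected_net_income:,.0f}")
```

Note that the probabilities must sum to 1; if you add a fourth or fifth scenario, rebalance the weights accordingly.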
Below is a simplified flowchart showing the scenario analysis steps:
```mermaid
flowchart LR
    A["Define Key Variables"] --> B["Identify Range of Possible Values"]
    B --> C["Construct Distinct Scenarios<br/>(Base, Best, Worst)"]
    C --> D["Analyze Financial Impact<br/>and Sensitivity"]
    D --> E["Incorporate Feedback<br/>and Adjust Assumptions"]
```
Let’s do a quick hypothetical:
Imagine you’re forecasting net income for a mid-sized manufacturing company that produces electric motors. You’ve got last year’s net income of $25 million, and you’re initially projecting 10% growth because, well, it grew 10% last year. That’s the anchor. Double-check if you’re possibly falling for anchoring bias. Maybe raw material costs are expected to increase. Perhaps a competitor has launched a cheaper motor overseas. A contrarian scenario might set growth at just 1%. Meanwhile, a bullish scenario might assume rapid sales expansion to new overseas markets, pegging growth at 15%.
Your scenario outcomes might look like this:

| Scenario   | Growth Assumption | Projected Net Income |
|------------|-------------------|----------------------|
| Contrarian | 1%                | $25.25 million       |
| Base       | 10%               | $27.50 million       |
| Bullish    | 15%               | $28.75 million       |
Even a quick table like that can spark important discussions. Are we overestimating the best-case scenario because we’re overconfident in global expansion? Are we ignoring data that suggests an economic slowdown in the biggest market?
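The scenario figures follow mechanically from the $25 million prior-year net income and the three growth assumptions in the example:

```python
# Three-scenario net income projection from the manufacturing example:
# $25M prior-year net income under 1%, 10%, and 15% growth assumptions.
prior_net_income = 25_000_000
growth_by_scenario = {"contrarian": 0.01, "base": 0.10, "bullish": 0.15}

for name, growth in growth_by_scenario.items():
    projected = prior_net_income * (1 + growth)
    print(f"{name:>10}: ${projected:,.0f}")
```

Keeping the calculation in a loop like this makes it trivial to add a fourth or fifth scenario later without touching the logic.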
It’s usually helpful to have at least one scenario that feels, well, a bit painful. That’s your contrarian scenario. I’ve been in strategy meetings where a practiced devil’s advocate almost always ended up injecting more realism into the forecasts. By systematically challenging the consensus, you reveal hidden assumptions and reduce groupthink.
If we cross-reference “contrarian perspectives” with risk management frameworks taught throughout the CFA Program, you’ll see repeatedly how challenging group assumptions is one of the best ways to spot blind spots. This approach is also relevant for income tax forecasting (Chapter 8) when tax rates or deferred liabilities could unexpectedly change. Meanwhile, for companies with global operations (Chapter 11), exchange rates can shift drastically—perfect fodder for contrarian scenario planning.
In scenario analysis, the biggest risk is building multiple scenarios but still failing to challenge the underlying biases. For instance, if you anchor on last quarter’s performance, you might set growth rates that differ by only a small margin across best vs. worst scenarios. Overconfidence might creep in if you treat your base-case scenario with too much certainty. And confirmation bias can prompt you to find reasons why your best-case scenario is actually “most likely,” even if the data doesn’t support that conclusion.
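One simple, mechanical guard against anchored scenarios is to compare the best-to-worst growth spread against historical variability. The history and the two-standard-deviation threshold below are assumptions for illustration, not a rule from the curriculum:

```python
# Flag scenarios whose growth spread is narrow relative to historical
# variability (hypothetical history; the 2-stdev threshold is an assumption).
import statistics

historical_growth = [0.12, 0.04, 0.10, -0.02, 0.08]  # hypothetical past growth
hist_stdev = statistics.stdev(historical_growth)

best_growth, worst_growth = 0.15, 0.01
spread = best_growth - worst_growth

# Scenarios spanning less than ~2 standard deviations of history may be
# anchored to recent results rather than genuinely distinct outcomes.
if spread < 2 * hist_stdev:
    print("Warning: scenario spread may be anchored to recent results")
else:
    print(f"Scenario spread ({spread:.0%}) looks adequately wide")
```

A check like this won’t tell you the scenarios are right, but it forces an explicit conversation when the best and worst cases barely differ.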
Some finance teams employ Python (or R, or specialized forecasting software) to generate Monte Carlo simulations. For instance, you could set up a quick script:
```python
import numpy as np

growth_rates = np.random.normal(loc=0.06, scale=0.02, size=10000)    # mean 6%, stdev 2%
cost_inflation = np.random.normal(loc=0.02, scale=0.01, size=10000)  # mean 2%, stdev 1%
starting_net_income = 25_000_000

results = starting_net_income * (1 + growth_rates - cost_inflation)

mean_income = np.mean(results)
worst_case = np.percentile(results, 5)
best_case = np.percentile(results, 95)

print(f"Mean net income: ${mean_income:,.0f}")
print(f"Worst-case (5th percentile): ${worst_case:,.0f}")
print(f"Best-case (95th percentile): ${best_case:,.0f}")
```
Though this snippet is just a simplified illustration, it demonstrates how you can model thousands of “mini-scenarios.” You can incorporate other variables (like foreign exchange rates, interest rates, or commodity prices) to build a richer picture. By quantifying many outcomes, you naturally confront your biases because the simulation exposes a distribution of possible results, not just a single guess.
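As one way to incorporate an additional variable, the sketch below adds a hypothetical foreign-exchange effect to the same simulation. The FX volatility, the 40% overseas income share, and the seeded generator are all assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded for reproducibility
n = 10_000

# Same setup as before, plus a hypothetical FX translation effect on the
# share of income earned overseas (all parameters are illustrative).
growth = rng.normal(0.06, 0.02, n)
cost_inflation = rng.normal(0.02, 0.01, n)
fx_change = rng.normal(0.00, 0.05, n)  # assumed +/-5% stdev on the foreign currency
foreign_share = 0.40                   # assumed 40% of income earned abroad

starting_net_income = 25_000_000
results = (starting_net_income * (1 + growth - cost_inflation)
           * (1 + foreign_share * fx_change))

print(f"Mean: ${np.mean(results):,.0f}")
print(f"5th percentile: ${np.percentile(results, 5):,.0f}")
print(f"95th percentile: ${np.percentile(results, 95):,.0f}")
```

Because the FX term widens the distribution, the 5th/95th percentile band will be noticeably wider than in the two-variable version, which is exactly the kind of humbling output that counteracts overconfidence.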
While forecasting is usually forward-looking, many of our assumptions tie back to historical financial statements. Overconfidence may cause us to ignore subtle red flags in the balance sheet (Chapter 3) or income statement (Chapter 2). Confirmation bias can lead us to accept questionable revenue recognition (Chapter 2.1) at face value. Anchoring bias might result in misjudging the effect of new accounting rules (like IFRS vs. US GAAP differences, Chapter 1.4). Realistically, robust scenario analysis helps us remain vigilant about possible challenges in the firm’s reported financials.
From an exam standpoint, you should be prepared to define each bias, identify it within a vignette or item set, and recommend concrete mitigation techniques such as scenario analysis, devil’s-advocate review, or probability weighting.
Remember, the CFA exams often feature scenario-based questions that test your ability to spot biases within a narrative. Practice reading through a fictional CFO’s statements, identifying overly optimistic or anchored forecasts, and formulating an unbiased approach.
Important Notice: FinancialAnalystGuide.com provides supplemental CFA study materials, including mock exams, sample exam questions, and other practice resources to aid your exam preparation. These resources are not affiliated with or endorsed by the CFA Institute. CFA® and Chartered Financial Analyst® are registered trademarks owned exclusively by CFA Institute. Our content is independent, and we do not guarantee exam success. CFA Institute does not endorse, promote, or warrant the accuracy or quality of our products.