Why a single exit number is almost always wrong — and what to use instead

Every valuation method the standard literature offers produces the same thing: one number. The VC Method gives you one exit value. Comps give you one multiple. The First Chicago Method gives you three scenarios — but still collapses them into a single weighted estimate. The problem is not the methods themselves. The problem is the single-point output, because exit outcomes are not normally distributed around a mean.

Exit outcomes follow a log-normal distribution — highly skewed right, with a long tail of exceptional outcomes and a compressed left tail of sub-$10M exits. A single point estimate, however carefully constructed, hides almost all the information that matters for assessing whether a return thesis is achievable. It tells you nothing about the probability attached to that number.

Why exit values cannot be modelled with a single multiple

A company's exit value is the product of many multiplicative factors — market size, penetration rate, revenue multiple, timing premium, competitive dynamics at exit. When you multiply many independent positive random variables together, the central limit theorem applied to their logarithms pushes the product toward a log-normal distribution, largely regardless of the shape of the individual factors. This is not an assumption imposed on the data. It is a mathematical consequence of how exit values are generated — and it is confirmed empirically when you plot real exits on a probit log-scale chart and observe a straight line.
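The multiplicative argument is easy to check numerically. A minimal sketch, using hypothetical uniform factors (the factor count and ranges are illustrative, not empirical): multiplying independent positive draws produces a heavily right-skewed product whose logarithm is close to normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical multiplicative factors: each simulated exit value is a
# product of independent positive draws (market size, penetration,
# multiple, ...). The uniform range is an arbitrary illustration.
n_companies = 10_000
n_factors = 8
factors = rng.uniform(0.5, 3.0, size=(n_companies, n_factors))
exit_values = factors.prod(axis=1)  # product of independent variables

# If the product is log-normal, the log of the values should be
# near-normal (skew close to 0), while the raw values stay right-skewed.
print(f"skew of raw values: {stats.skew(exit_values):.2f}")
print(f"skew of log values: {stats.skew(np.log(exit_values)):.2f}")
```

No distributional assumption is made about the factors themselves; the near-zero skew of the logs is the multiplicative central limit theorem doing the work.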

How to read the full distribution of historical exit outcomes for your vertical

The probit log-scale chart is the diagnostic tool that makes this framework visual. The x-axis shows exit value on a logarithmic scale — spanning $1M to $100B, which is the realistic range of venture exits. The y-axis shows cumulative probability on a probit scale, the inverse of the standard normal CDF, which has one important property: a log-normal distribution plots as a straight line. When real exit data lies close to that line, the log-normal model fits well. When it deviates, the chart shows you exactly where and by how much.
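The straight-line diagnostic takes a few lines to construct. This sketch uses synthetic log-normal data (the mu and sigma are assumptions for illustration); the Hazen plotting positions and the probit transform via scipy's `norm.ppf` are standard choices, not a prescription of the framework:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic exit values in $M, drawn log-normally purely for illustration.
exits_musd = rng.lognormal(mean=np.log(50), sigma=1.4, size=500)

# x-axis: log10 of exit value; y-axis: probit of empirical cumulative prob.
x = np.log10(np.sort(exits_musd))
ranks = (np.arange(1, len(x) + 1) - 0.5) / len(x)  # Hazen plotting positions
y = stats.norm.ppf(ranks)                          # probit transform

# A log-normal sample should plot as a near-straight line, so the
# correlation between x and y should be close to 1.
r = np.corrcoef(x, y)[0, 1]
print(f"probit-plot correlation: {r:.4f}")
```

Plotting `x` against `y` gives the chart itself; the correlation coefficient is a quick numerical stand-in for eyeballing straightness.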

Fitting a maximum likelihood log-normal curve to real exits in a vertical gives you three numbers derived from actual outcomes, not from a comp table:

P10 — the exit value that 90% of companies in this cohort have exceeded.

P50 — the median exit, the base case grounded in historical data.

P90 — the exit value only the top decile achieves.

These are not scenarios you construct. They are what the data says.
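Given fitted parameters, the three percentiles follow directly from the inverse normal CDF. A minimal sketch, with illustrative mu and sigma rather than values fitted to any real vertical:

```python
import numpy as np
from scipy import stats

# Illustrative fitted parameters for a vertical (assumed, not real data):
# mu and sigma describe the normal distribution of log exit values ($M).
mu, sigma = np.log(60), 1.3

# Percentile q of a log-normal: exp(mu + sigma * probit(q)).
p10, p50, p90 = (np.exp(mu + sigma * stats.norm.ppf(q)) for q in (0.10, 0.50, 0.90))
print(f"P10 ≈ ${p10:,.0f}M  P50 ≈ ${p50:,.0f}M  P90 ≈ ${p90:,.0f}M")
```

Note the asymmetry this produces: P90 sits much further above the median than P10 sits below it, which is exactly the right-skew the section describes.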

These three numbers tell a fundamentally richer story than any revenue multiple. A fund whose thesis requires P90 outcomes to return 3x is a structurally different — and higher-risk — proposition than one whose P50 outcome delivers the same return. The multiple doesn't tell you which one you're building.

Why the same exit looks very different depending on when you invested

The distribution is not static. It shifts with inputs such as the vertical and the stage at which you invest.

The Critical Question

Does your investment thesis depend on a P50 outcome, or a P90 outcome? If your fund needs the company to exit above $1B to return your check at 3x, and the P90 for that vertical at that stage is $800M, your thesis is structurally dependent on an outcome that exceeds 90% of historical exits. That is not necessarily a reason not to invest — but it must be explicit, not hidden inside a "base case" that nobody believes.
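Making that dependency explicit is a one-line calculation once the distribution is fitted. A sketch with assumed parameters (the mu and sigma here are illustrative, not the real fit for any vertical or stage):

```python
import numpy as np
from scipy import stats

# Assumed fitted distribution for the vertical/stage (illustrative numbers):
# log exit values in $M are normal with these parameters.
mu, sigma = np.log(60), 1.3

required_exit = 1_000  # fund thesis needs an exit above $1B ($M)
z = (np.log(required_exit) - mu) / sigma
percentile = stats.norm.cdf(z)      # where the required exit sits
p_exceed = 1 - percentile           # share of historical exits above it

print(f"P(exit > ${required_exit}M) ≈ {p_exceed:.1%}")
print(f"required exit sits at roughly the P{percentile * 100:.0f} of the distribution")
```

If the required exit lands beyond P90, the thesis is tail-dependent by construction, and the IC conversation should say so.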

How the exit distribution is calibrated to real data — not assumed

Maximum likelihood estimation finds the log-normal parameters — μ and σ of the underlying normal distribution of log-exit-values — that make the observed historical exit data most probable. Applied to a dataset of real exits in a vertical, it produces a fitted probit line through the data cloud. Companies above the line outperformed the fitted distribution at their exit value. Companies below it underperformed. The line itself is not a forecast — it is the best characterisation of the distribution from which exits in this cohort are drawn.
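For a log-normal with no location shift, the MLE needs no optimiser: the parameters are simply the sample mean and standard deviation of the log exit values. A sketch on synthetic data (the true parameters are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic "historical exits" in $M, a stand-in for a real vertical's data.
true_mu, true_sigma = np.log(40), 1.2
exits = rng.lognormal(true_mu, true_sigma, size=400)

# Closed-form log-normal MLE: mean and (population) standard deviation
# of the log exit values.
logs = np.log(exits)
mu_hat, sigma_hat = logs.mean(), logs.std(ddof=0)

# Percentiles of the fitted distribution fall out directly.
p50 = np.exp(mu_hat)
p90 = np.exp(mu_hat + sigma_hat * stats.norm.ppf(0.90))
print(f"mu_hat={mu_hat:.3f} sigma_hat={sigma_hat:.3f} "
      f"P50=${p50:,.0f}M P90=${p90:,.0f}M")
```

The fitted probit line described above is just this distribution replotted in probit-log coordinates; points off the line are companies the fit over- or under-predicts.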

This is what distinguishes the approach from comparable transactions. Comps give you a handful of recent deals and a range. MLE fitting gives you the full distributional shape of hundreds of exits — including the tail behaviour that comps systematically underrepresent because the biggest outcomes are rare and often excluded from comp sets.

Is your return thesis P50-dependent, or are you betting on a tail?

Thesis is P50-grounded: Your return profile works at median historical outcomes for this vertical and stage. This is a defensible, well-constructed investment. Verify that the P10 scenario — which will occur roughly one time in ten — is survivable at fund level without material damage to the portfolio.

Thesis requires P75+: You need an above-median outcome. Acceptable — many strong investments sit here — but the Pv × Pc exit likelihood score and CGI growth trajectory must provide a specific, documented basis for why this company is positioned above the median. "We think it's exceptional" is not documentation.

Thesis requires P90+: You are betting on a tail outcome — one that only the top 10% of historical exits in this vertical have reached. This is not automatically wrong: some funds are explicitly tail-hunting and size their portfolio construction accordingly. But it must be stated explicitly. An IC should never approve a deal whose return thesis is tail-dependent without every partner in the room understanding and acknowledging that dependency.
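The three bands above can be mechanised as a small classifier. The function name and band labels here are mine, and the fitted parameters are illustrative assumptions, not real data:

```python
import numpy as np
from scipy import stats

def thesis_band(required_exit_musd: float, mu: float, sigma: float) -> str:
    """Classify where a required exit sits in a fitted log-normal.

    mu/sigma parametrise the normal distribution of log exit values ($M);
    the bands follow the section's three categories.
    """
    pct = stats.norm.cdf((np.log(required_exit_musd) - mu) / sigma)
    if pct <= 0.50:
        return "P50-grounded"
    if pct < 0.90:
        return "P75+ (above-median)"
    return "P90+ (tail-dependent)"

# Illustrative fitted parameters (assumed, not from real exits).
mu, sigma = np.log(60), 1.3
print(thesis_band(50, mu, sigma))    # required exit below the median
print(thesis_band(1_000, mu, sigma)) # required exit far into the tail
```

Running the required exit through this at IC time makes the dependency explicit instead of leaving it buried in a base case.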