Empirical probabilities are not exactly 0 or 1

Cromwell’s Rule in statistics argues that no empirical proposition should be assigned a subjective probability of exactly \(0\) or \(1\); it is always possible to be mistaken. (Some argue that this rule should be generalized to logical facts as well.)

A probability of exactly \(0\) or \(1\) corresponds to infinite log odds, and would require infinitely strong evidence to reach starting from any finite prior. To put it another way, if you don’t start out infinitely certain of a fact before making any observations (before you were born), you won’t reach infinite certainty after any finite number of observations involving finite probabilities.
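The arithmetic here can be sketched in a few lines of Python. This is an illustration with made-up numbers, not anything from the original text: in log-odds form, each observation with a finite likelihood ratio adds a finite amount, so any finite number of observations leaves the log odds finite.

```python
import math

def log_odds(p):
    """Log odds of a probability; defined only for 0 < p < 1."""
    return math.log(p / (1 - p))

def update(prior_log_odds, log_likelihood_ratios):
    # In log-odds form, Bayesian updating is just addition.
    return prior_log_odds + sum(log_likelihood_ratios)

prior = log_odds(0.5)              # 0.0: maximally uncertain prior
evidence = [math.log(100)] * 1000  # 1000 observations, each 100:1 in favor

posterior = update(prior, evidence)
print(posterior)  # enormous, but still finite

# Probability exactly 1 would mean infinite log odds:
try:
    log_odds(1.0)
except ZeroDivisionError:
    print("probability 1 corresponds to infinite log odds")
```

Converting the finite posterior log odds back to a probability gives something extremely close to \(1\), but never exactly \(1\).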

All sensible universal priors seem so far to have the property that they never assign probability exactly \(0\) or \(1\) to any predicted future observation, since their hypothesis space is always broad enough to include an imaginable state of affairs in which the future is different from the past.

If you did assign a probability of exactly \(0\) or \(1,\) you would be unable to update no matter how much contrary evidence you observed. Prior odds of 0 : 1 (or 1 : 0), times any finite likelihood ratio, end up yielding 0 : 1 (or 1 : 0).
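A minimal sketch (plain Python, illustrative numbers of my choosing) of why extreme odds are absorbing: multiplying zero odds by any finite likelihood ratio yields zero again, whereas any nonzero prior can recover.

```python
def update_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes' rule: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# A modest prior of 1:100 can be overwhelmed by strong evidence...
odds = 0.01
for _ in range(10):
    odds = update_odds(odds, 20.0)  # ten observations, each 20:1 in favor
print(odds)  # astronomically favorable odds

# ...but prior odds of exactly 0:1 are stuck forever:
stuck = 0.0
for _ in range(10):
    stuck = update_odds(stuck, 20.0)
print(stuck)  # 0.0 -- no finite evidence can move it
```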

As Rafal Smigrodski put it:

“I am not totally sure I have to be always unsure. Maybe I could be legitimately sure about something. But once I assign a probability of 1 to a proposition, I can never undo it. No matter what I see or learn, I have to reject everything that disagrees with the axiom. I don’t like the idea of not being able to change my mind, ever.”


  • Bayesian reasoning

    A probability-theory-based view of the world; a coherent way of changing probabilistic beliefs based on evidence.