# Inductive prior

An “inductive prior” is a state of belief, held before seeing any evidence, that is conducive to learning when the evidence finally appears. A classic example would be observing a coin come up heads or tails many times. If the coin is biased to come up heads 1/4 of the time, the inductive prior from Laplace’s Rule of Succession will start predicting future flips to come up tails with probability approaching 3/4. The maximum entropy prior for the coin, which says that every coinflip has a 50% chance of coming up heads and that all sequences of heads and tails are equally probable, will never raise its probability that the next flip comes up heads, even after observing the coin come up heads thirty times in a row.
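As a sketch of the contrast above (the function names here are illustrative, not from the article): Laplace's Rule of Succession predicts heads with probability (h + 1)/(n + 2) after seeing h heads in n flips, so its predictions track the observed frequency, while the maximum entropy prior returns 1/2 no matter what has been observed.

```python
from fractions import Fraction

def laplace_next_heads(heads, flips):
    """Laplace's Rule of Succession: P(next = heads) = (heads + 1) / (flips + 2)."""
    return Fraction(heads + 1, flips + 2)

def maxent_next_heads(heads, flips):
    """Maximum entropy prior: every flip is 50% heads, regardless of the evidence."""
    return Fraction(1, 2)

# A coin biased toward tails might show, say, 8 heads in 32 flips.
# Laplace's rule moves toward the observed 1/4 frequency:
print(laplace_next_heads(8, 32))    # -> 9/34, close to 1/4

# The maximum entropy prior never learns, even from thirty heads in a row:
print(maxent_next_heads(30, 30))    # -> 1/2
```

As the sample grows, `laplace_next_heads` converges to the coin's true bias, which is what makes the prior "inductive"; `maxent_next_heads` stays fixed forever.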

The prior in Solomonoff induction is another example of an inductive prior—far more powerful, far more complicated, and entirely unimplementable on physically possible hardware.

Children:

- Solomonoff induction
A simple way to superintelligently predict sequences of data, given unlimited computing power.

- Laplace's Rule of Succession
Suppose you flip a coin with an unknown bias 30 times, and see 4 heads and 26 tails. The Rule of Succession says the next flip has a 5/32 chance of showing heads.

- Universal prior
A “universal prior” is a probability distribution containing *all* the hypotheses, for some reasonable meaning of “all”. E.g., “every possible computer program that computes probabilities”.
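The 5/32 figure in the Rule of Succession summary can be checked directly from the formula (h + 1)/(n + 2):

```python
from fractions import Fraction

# 4 heads observed in 30 flips of a coin with unknown bias.
heads, flips = 4, 30
p_next_heads = Fraction(heads + 1, flips + 2)
print(p_next_heads)  # -> 5/32
```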

Parents:

- Ignorance prior
Key equations for quantitative Bayesian problems, describing exactly the right shape for what we believed before observation.