Likelihood function

Let’s say you have a piece of evidence \(e\) and a set of hypotheses \(\mathcal H.\) Each \(H_i \in \mathcal H\) assigns some likelihood to \(e.\) The function \(\mathcal L_{e}(H_i)\) that reports this likelihood for each \(H_i \in \mathcal H\) is known as a “likelihood function.”

For example, let’s say that the evidence is \(e_c\) = “Mr. Boddy was killed with a candlestick,” and the hypotheses are \(H_S\) = “Miss Scarlett did it,” \(H_M\) = “Colonel Mustard did it,” and \(H_P\) = “Mrs. Peacock did it.” Suppose further that if Miss Scarlett was the murderer, she’s 20% likely to have used a candlestick; if Colonel Mustard did it, he’s 10% likely to have used a candlestick; and if Mrs. Peacock did it, she’s only 1% likely to have used a candlestick. In this case, the likelihood function is

$$\mathcal L_{e_c}(h) = \begin{cases} 0.2 & \text{if $h = H_S$} \\ 0.1 & \text{if $h = H_M$} \\ 0.01 & \text{if $h = H_P$} \\ \end{cases} $$
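
In code, this discrete likelihood function is just a lookup table. Here is a minimal sketch in Python (the dictionary keys and the function name `L_ec` are illustrative, not anything standard):

```python
# Likelihoods from the Clue example: how likely each suspect is to
# have used a candlestick, given that they committed the murder.
likelihood_candlestick = {
    "H_S": 0.2,   # Miss Scarlett did it
    "H_M": 0.1,   # Colonel Mustard did it
    "H_P": 0.01,  # Mrs. Peacock did it
}

def L_ec(h):
    """Likelihood that hypothesis h assigns to e_c = 'killed with a candlestick'."""
    return likelihood_candlestick[h]

print(L_ec("H_S"))  # 0.2
```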

For an example with a continuous space of hypotheses, consider a possibly biased coin whose bias \(b\) to come up heads on any particular flip might be anywhere between \(0\) and \(1\). Suppose we observe the coin to come up heads, tails, and tails. We will denote this evidence \(e_{HTT}.\) The likelihood function over each hypothesis \(H_b\) = “the coin is biased to come up heads \(b\) portion of the time,” for \(b \in [0, 1],\) is:

$$\mathcal L_{e_{HTT}}(H_b) = b\cdot (1-b)\cdot (1-b).$$
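
To get a feel for the shape of this function, here is a short sketch (the variable names are illustrative; NumPy is used only to evaluate the formula on a grid). The likelihood peaks at \(b = 1/3,\) the bias under which one head out of three flips is most probable:

```python
import numpy as np

# Evaluate L_{e_HTT}(H_b) = b * (1 - b) * (1 - b) on a grid of candidate biases.
b = np.linspace(0.0, 1.0, 1001)
likelihood = b * (1.0 - b) ** 2

# A few sample values of the likelihood function.
for bias in (0.1, 1 / 3, 0.5, 0.9):
    print(f"b = {bias:.3f}: L = {bias * (1 - bias) ** 2:.4f}")

# The maximum over the grid sits near b = 1/3.
print("peak at b =", b[np.argmax(likelihood)])
```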

There’s no reason to normalize likelihood functions so that they sum (or integrate) to 1. They aren’t probability distributions; they’re functions expressing each hypothesis’s propensity to yield the observed evidence. For example, if the evidence is really obvious (\(e_s\) = “the sun rose this morning”), it might be the case that almost all hypotheses assign it a very high likelihood, in which case the sum of the likelihood function will be much more than 1.
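
As a quick check using the two examples above, the candlestick likelihoods sum to \(0.2 + 0.1 + 0.01 = 0.31,\) and the coin likelihood function integrates to

$$\int_0^1 b \cdot (1-b) \cdot (1-b) \, db = \frac{1}{12},$$

neither of which is 1.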

Likelihood functions carry information about absolute likelihoods, and therefore contain information that relative likelihoods do not: namely, absolute likelihoods can be used to check a hypothesis for strict confusion.
