Relative likelihood

Relative likelihoods express how much more likely an observation is under one hypothesis than under another. For example, suppose we’re investigating the murder of Mr. Boddy, and we find that he was killed by poison. The suspects are Miss Scarlett and Colonel Mustard. Now, suppose that the probability that Miss Scarlett would use poison, if she were the murderer, is 20%. And suppose that the probability that Colonel Mustard would use poison, if he were the murderer, is 10%. Then Miss Scarlett is twice as likely to use poison as a murder weapon as Colonel Mustard. Thus, the “Mr. Boddy was poisoned” evidence supports the “Scarlett” hypothesis twice as much as the “Mustard” hypothesis, for relative likelihoods of \((2 : 1).\)
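
As a minimal sketch (using the made-up 20% and 10% figures from the example above), the relative likelihood is just the ratio of the two conditional probabilities:

```python
# Hypothetical conditional probabilities from the example above:
# P(poison | Scarlett) and P(poison | Mustard).
p_poison_given_scarlett = 0.20
p_poison_given_mustard = 0.10

# The likelihood ratio of "Scarlett" to "Mustard" given the poison evidence.
ratio = p_poison_given_scarlett / p_poison_given_mustard
print(f"Relative likelihoods: ({ratio:g} : 1)")  # -> (2 : 1)
```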

These likelihoods are called “relative” because it wouldn’t matter if the respective probabilities were 4% and 2%, or 40% and 20%; what matters is the relative proportion.

Relative likelihoods may be given between many different hypotheses at once. Given the evidence \(e_p\) = “Mr. Boddy was poisoned”, it might be the case that Miss Scarlett, Colonel Mustard, and Mrs. White have the respective probabilities 20%, 10%, and 1% of using poison any time they commit a murder. In this case, we have three hypotheses: \(H_S\) = “Scarlett did it”, \(H_M\) = “Mustard did it”, and \(H_W\) = “White did it”. The relative likelihoods between them may be written \((20 : 10 : 1).\)

In general, given a list of hypotheses \(H_1, H_2, \ldots, H_n,\) the relative likelihoods on the evidence \(e\) can be written as a scale-invariant list of the likelihoods \(\mathbb P(e \mid H_i)\) for each \(i\) from 1 to \(n.\) In other words, the relative likelihoods are

$$ \alpha \mathbb P(e \mid H_1) : \alpha \mathbb P(e \mid H_2) : \ldots : \alpha \mathbb P(e \mid H_n) $$

where the choice of \(\alpha > 0\) does not change the value denoted by the list (i.e., the list is scale-invariant). For example, the relative likelihood list \((20 : 10 : 1)\) above denotes the same thing as \((4 : 2 : 0.2),\) which in turn denotes the same thing as \((60 : 30 : 3).\) This is why we call them “relative likelihoods”: all that matters is the ratio between each pair of terms, not the absolute values.
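
A minimal sketch of this scale invariance, assuming we represent a relative likelihood list as a plain Python list and compare lists by normalizing them to sum to 1:

```python
def normalize(likelihoods):
    """Rescale a relative likelihood list so its entries sum to 1.

    Two lists denote the same relative likelihoods exactly when
    they normalize to the same result.
    """
    total = sum(likelihoods)
    return [x / total for x in likelihoods]

# All three lists from the example denote the same relative likelihoods.
print(normalize([20, 10, 1]))   # [0.645..., 0.322..., 0.032...]
print(normalize([4, 2, 0.2]))   # same values
print(normalize([60, 30, 3]))   # same values
```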

Any two terms in a list of relative likelihoods can be used to generate a likelihood ratio between two hypotheses. For example, above, the likelihood ratio of \(H_S\) to \(H_M\) is \((2 : 1),\) and the likelihood ratio of \(H_S\) to \(H_W\) is \((20 : 1).\) This means that the evidence \(e_p\) supports the “Scarlett” hypothesis 2x more than it supports the “Mustard” hypothesis, and 20x more than it supports the “White” hypothesis.
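
Continuing the sketch above (the helper below is hypothetical, introduced only for illustration), pairwise likelihood ratios can be read straight off the list:

```python
def likelihood_ratio(likelihoods, i, j):
    """Likelihood ratio of hypothesis i to hypothesis j,
    taken from a relative likelihood list."""
    return likelihoods[i] / likelihoods[j]

rel = [20, 10, 1]  # (H_S : H_M : H_W) from the poison example
print(likelihood_ratio(rel, 0, 1))  # 2.0  -> e_p supports Scarlett 2x over Mustard
print(likelihood_ratio(rel, 0, 2))  # 20.0 -> e_p supports Scarlett 20x over White
```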

Relative likelihoods summarize the strength of the evidence represented by the observation that Mr. Boddy was poisoned: under Bayes’ rule, the evidence points to Miss Scarlett to the same degree whether the absolute probabilities are 20% vs. 10%, or 4% vs. 2%.

By Bayes’ rule, the way to update your beliefs in the face of evidence is to take your prior odds and simply multiply them by the corresponding relative likelihood list, to obtain your posterior odds. See also Bayes’ rule: Odds form.
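
As a final sketch of this odds-form update (the uniform \((1 : 1 : 1)\) prior below is an assumption for illustration, not something the text specifies):

```python
def update_odds(prior_odds, relative_likelihoods):
    """Bayes' rule, odds form: posterior odds are the elementwise
    product of prior odds and relative likelihoods."""
    return [p * l for p, l in zip(prior_odds, relative_likelihoods)]

prior = [1, 1, 1]   # assumed uniform prior odds over (H_S, H_M, H_W)
rel = [20, 10, 1]   # relative likelihoods of e_p under each hypothesis
posterior = update_odds(prior, rel)
print(posterior)    # [20, 10, 1] -> posterior odds (20 : 10 : 1)

# Normalizing turns posterior odds into posterior probabilities.
total = sum(posterior)
print([round(x / total, 3) for x in posterior])  # [0.645, 0.323, 0.032]
```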
