Solomonoff induction

Solomonoff induction is an ideal answer to questions like "What probably comes next in the sequence 1, 1, 2, 3, 5, 8?" or "Given the last three years of visual data from this webcam, what will this robot probably see next?" or "Will the sun rise tomorrow?" Solomonoff induction requires infinite computing power, and is defined by taking every computable algorithm for giving a probability distribution over future data given past data, weighting each by its algorithmic simplicity, and updating those weights by comparison to the actual data.
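The idea can be sketched in miniature. True Solomonoff induction mixes over *all* computable predictors and is uncomputable; the toy below (a hypothetical illustration, with a hand-picked family of four predictors and made-up "program lengths" standing in for algorithmic complexity) shows the two moving parts: a simplicity prior of the form 2^-length, and a Bayesian update that multiplies each predictor's weight by the probability it assigned to the observed bit.

```python
# Toy finite analogue of the Solomonoff mixture. Hypothetical example:
# real Solomonoff induction sums over every computable predictor; here
# we use four hand-picked predictors with invented "program lengths".

# Each predictor maps a history (tuple of bits) to P(next bit = 1).
predictors = {
    "always-one":  lambda hist: 0.99,
    "always-zero": lambda hist: 0.01,
    "copy-last":   lambda hist: 0.9 if hist and hist[-1] == 1 else 0.1,
    "alternate":   lambda hist: 0.1 if hist and hist[-1] == 1 else 0.9,
}

# Stand-in "program lengths" in bits, giving the 2^-length simplicity prior.
lengths = {"always-one": 2, "always-zero": 2, "copy-last": 3, "alternate": 3}

def predict(weights, history):
    """Mixture probability that the next bit is 1."""
    total = sum(weights.values())
    return sum(w * predictors[name](history)
               for name, w in weights.items()) / total

def update(weights, history, bit):
    """Bayesian update: multiply each weight by the likelihood
    that predictor assigned to the bit actually observed."""
    return {
        name: w * (predictors[name](history) if bit == 1
                   else 1.0 - predictors[name](history))
        for name, w in weights.items()
    }

weights = {name: 2.0 ** -lengths[name] for name in predictors}
history = []
for bit in [1, 0, 1, 0, 1, 0, 1, 0]:
    weights = update(weights, tuple(history), bit)
    history.append(bit)

# After seeing alternating bits, the "alternate" predictor dominates
# the posterior, despite starting with a lower prior weight than the
# two simpler constant predictors.
best = max(weights, key=weights.get)
```

Note the trade-off the prior encodes: "alternate" starts with half the weight of the constant predictors, but after a few observations its superior likelihoods swamp that initial penalty, which is exactly how the full mixture comes to track whichever computable predictor fits the data.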

E.g., somewhere in the ideal Solomonoff distribution is an exact copy of you, right now, staring at a string of 1s and 0s and trying to predict what comes next—though this copy of you starts out with a very low weight in the mixture owing to its complexity. Since a copy of you is present in this mixture of computable predictors, we can prove a theorem about how well Solomonoff induction does compared to an exact copy of you; namely, Solomonoff induction commits only a bounded amount of error relative to you, or any other computable way of making predictions. Solomonoff induction is thus a kind of perfect or rational ideal for probabilistically predicting sequences, although it cannot be implemented in reality due to requiring infinite computing power. Still, considering Solomonoff induction can give us important insights into how non-ideal reasoning should operate in the real world.
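The bounded-error claim can be stated precisely. One standard form of the bound (due to Solomonoff, with constants as usually given following Hutter) says that if the data are actually generated by a computable distribution $\mu$, the Solomonoff mixture $M$ accumulates only a finite amount of expected squared prediction error over the entire infinite sequence:

```latex
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[
    \bigl( M(1 \mid x_{<t}) - \mu(1 \mid x_{<t}) \bigr)^{2}
\right]
\;\le\; \ln 2 \cdot K(\mu),
```

where $x_{<t}$ is the sequence observed so far and $K(\mu)$ is the length in bits of the shortest program computing $\mu$. Since the right-hand side is a constant, the per-step error must go to zero: after enough data, the mixture predicts essentially as well as the true environment—or as well as the copy of you embedded in it.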

Additional reading:



  • Methodology of unbounded analysis

    What we do and don't understand how to do, even given unlimited computing power, is a critical distinction and an important frontier.

  • Inductive prior

    Some states of pre-observation belief can learn quickly; others never learn anything. An "inductive prior" is of the former type.