Marcus Hutter’s AIXI is the perfect rolling sphere of advanced agent theory—it’s not realistic, but you can’t understand more complicated scenarios if you can’t envision the rolling sphere. At the core of AIXI is Solomonoff induction, a way of using infinite computing power to probabilistically predict binary sequences with (vastly) superintelligent acuity. Solomonoff induction proceeds roughly by considering all possible computable explanations, with prior probabilities weighted by their algorithmic simplicity, and updating their probabilities based on how well they match observation. We then translate the agent problem into a sequence of percepts, actions, and rewards, so we can use sequence prediction. AIXI is roughly the agent that considers all computable hypotheses to explain the so-far-observed relation of sensory data and actions to rewards, and then searches for the best strategy to maximize future rewards. To a first approximation, AIXI could figure out every ordinary problem that any human being or intergalactic civilization could solve. If AIXI actually existed, it wouldn’t be a god; it’d be something that could tear apart a god like tinfoil.
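The prediction step at AIXI's core can be sketched in miniature. The following is a toy illustration, not Solomonoff induction itself: real Solomonoff induction mixes over all programs for a universal Turing machine and is uncomputable, whereas this sketch restricts the hypothesis class to periodic bit patterns (an assumption made purely for illustration). It does, however, exhibit the two moves the paragraph above describes: each hypothesis of description length L gets the simplicity prior 2^-L, and hypotheses contradicted by observation are discarded.

```python
from fractions import Fraction

def hypotheses(max_period):
    """Enumerate every periodic bit pattern with period <= max_period."""
    for length in range(1, max_period + 1):
        for n in range(2 ** length):
            yield tuple((n >> i) & 1 for i in range(length))

def predict_next(observed, max_period=4):
    """Posterior probability that the next bit is 1, mixing over all
    periodic hypotheses consistent with the observed prefix, each
    weighted by the simplicity prior 2^-length."""
    weight_one = Fraction(0)
    weight_total = Fraction(0)
    for pattern in hypotheses(max_period):
        # Deterministic hypotheses either match the data exactly or are ruled out.
        if all(bit == pattern[i % len(pattern)] for i, bit in enumerate(observed)):
            prior = Fraction(1, 2 ** len(pattern))
            weight_total += prior
            if pattern[len(observed) % len(pattern)] == 1:
                weight_one += prior
    return weight_one / weight_total

# After 0, 0, 0: the all-zero patterns dominate the mixture, but the
# length-4 pattern 0001 (prior 1/16) is still consistent and predicts a 1.
print(predict_next([0, 0, 0]))  # prints 1/16
```

Note the characteristic behavior: simpler explanations get more weight, but no consistent hypothesis is ever ruled out entirely, so the predicted probability of a surprising next bit is small rather than zero.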

Further information:


  • AIXI-tl

    A time-bounded version of the ideal agent AIXI that uses an impossibly large finite computer instead of a hypercomputer.


  • Central examples

    A list of central examples in the Value Alignment Theory domain.

  • Methodology of unbounded analysis

    What we do and don’t understand how to do using unlimited computing power is a critical distinction and an important frontier.