Random utility function

A ‘random utility function’ is a utility function selected according to some simple probability measure over a logical space of formal, compact specifications of utility functions.

For example: suppose utility functions are specified by computer programs (e.g. a program that maps an outcome description to a rational number). We then draw a random computer program from the standard universal prior on computer programs: \(2^{-\operatorname{K}(U)}\) where \(\operatorname{K}(U)\) is the algorithmic complexity (Kolmogorov complexity) of the utility-specifying program \(U.\)
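
Since \(\operatorname{K}(U)\) is uncomputable, any runnable illustration has to substitute something cruder. Below is a minimal Python sketch under that caveat: programs are strings of two-bit opcodes ending at their first HALT, which makes the encoding prefix-free, so feeding in fair coin flips selects each program \(p\) with probability exactly \(2^{-|p|}\), a length-based stand-in for \(2^{-\operatorname{K}(U)}\). The opcodes ADD1, DOUBLE, and NEGATE are invented for illustration.

```python
import random

# Toy opcodes, two bits each. Every program ends at its first HALT,
# so the encoding is prefix-free: drawing fair coin flips gives each
# program p probability exactly 2**(-length_in_bits(p)) -- a computable,
# length-based stand-in for the uncomputable 2**(-K(U)) prior.
OPS = {0b00: "HALT", 0b01: "ADD1", 0b10: "DOUBLE", 0b11: "NEGATE"}

def sample_program(rng):
    """Draw opcodes from fair coin flips until HALT appears."""
    program = []
    while True:
        op = OPS[rng.getrandbits(2)]
        program.append(op)
        if op == "HALT":
            return program

def utility(program, outcome):
    """Interpret the program as a map from an (integer-coded) outcome
    description to a number -- a toy utility-specifying program U."""
    acc = outcome
    for op in program:
        if op == "ADD1":
            acc += 1
        elif op == "DOUBLE":
            acc *= 2
        elif op == "NEGATE":
            acc = -acc
    return acc

rng = random.Random(0)
U = sample_program(rng)  # a 'random utility function'
print(U, [utility(U, o) for o in range(4)])
```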

This obvious measure could be amended further to e.g. take into account non-halting programs; to not put almost all of the probability mass on extremely simple programs; to put a satisficing criterion on whether it's computationally tractable and physically possible to optimize for \(U\) (as assumed in the Orthogonality Thesis); etcetera.
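
As a sketch of how such amendments might look in the same toy setting, the rejection sampler below redraws from a base prior until the candidate passes two predicates. The predicates shown (a trivial halting stub and a length cutoff standing in for tractability) are hypothetical placeholders, not the real criteria:

```python
import random

def amended_draw(sample, halts, tractable, rng, max_tries=10_000):
    """Rejection-sample from a base prior, discarding programs that
    fail the amendments above. In a real version, 'halts' would run
    U under a step budget and 'tractable' would apply a satisficing
    check that optimizing U is computationally/physically feasible."""
    for _ in range(max_tries):
        U = sample(rng)
        if halts(U) and tractable(U):
            return U
    raise RuntimeError("no acceptable program found")

def sample(rng):
    """Toy base prior: each specific bitstring of length n is drawn
    with probability 4**(-n), a length-penalized coin-flip prior."""
    bits = []
    while True:
        bits.append(rng.getrandbits(1))
        if rng.random() < 0.5:  # stop after each bit with prob 1/2
            return tuple(bits)

U = amended_draw(sample,
                 halts=lambda U: True,             # stub: assume halting
                 tractable=lambda U: len(U) < 20,  # stub length cutoff
                 rng=random.Random(1))
print(U)
```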

Complexity of value is the thesis that the attainable optimum of a random utility function has near-null goodness with very high probability. That is: the attainable optimum configurations of matter for a random utility function are, with very high probability, the moral equivalent of paperclips. This in turn implies that a superintelligence with a random utility function is with very high probability the moral equivalent of a paperclip maximizer.
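
Here is a toy Monte Carlo rendering of that claim, under one loud simplifying assumption: outcomes are modeled as \(n\)-bit configurations of matter, exactly one of which counts as valuable (value being highly specific), and the attainable optimum of a random utility function is modeled as a uniformly random configuration. The hit rate is then about \(2^{-n}\):

```python
import random

# Outcomes are n-bit 'configurations of matter'; exactly one is deemed
# valuable. Simplifying assumption: a random utility function's
# attainable optimum is a uniformly random configuration. The expected
# hit rate is then 2**-n -- near-null goodness with high probability.
n, trials = 32, 100_000
rng = random.Random(0)
valuable = rng.getrandbits(n)
hits = sum(rng.getrandbits(n) == valuable for _ in range(trials))
print(f"fraction of random optima that were valuable: {hits / trials:.6f}")
# Prints ~0.000000: almost every optimum is the moral equivalent of
# paperclips.
```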

A ‘random utility function’ is not:

  • A utility function randomly selected from whatever distribution of utility functions may actually exist among agents within the generalized universe. That is, a random utility function is not the utility function of a random actually-existing agent.

  • A utility function with maxentropy content. That is, a random utility function is not one that independently assigns a uniform random value between 0 and 1 to every distinguishable outcome. (This utility function would not be tractable to optimize for; we couldn't optimize it ourselves even if somebody paid us, so it's not covered by e.g. the Orthogonality Thesis. See the sketch after this list.)
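
To see why the maxentropy function is intractable, here is a hedged sketch: the independent uniform values are simulated with a hash (a literal lookup table over all distinguishable outcomes would not fit in the universe), and because the resulting landscape is structureless, no optimizer can do better than brute-force enumeration of outcomes.

```python
import hashlib

def maxent_utility(outcome: bytes) -> float:
    """Assign each distinguishable outcome an (effectively) independent
    uniform value in [0, 1), simulated here with a hash."""
    h = hashlib.sha256(outcome).digest()
    return int.from_bytes(h[:8], "big") / 2**64

# The values carry no exploitable structure, so nothing beats brute
# force: certifying an outcome with utility above 1 - 2**-40 takes
# about 2**40 evaluations in expectation; gradients and heuristics
# give no advantage.
best = max(range(10_000), key=lambda i: maxent_utility(i.to_bytes(8, "big")))
print(best, maxent_utility(best.to_bytes(8, "big")))
```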

Parents:

  • Paperclip maximizer

    This agent will not stop until the entire universe is filled with paperclips.