Preference framework

A ‘preference framework’ refers to a fixed algorithm that, possibly by updating or changing in other ways, determines what the agent prefers as terminal outcomes. ‘Preference framework’ is a more general term than ‘utility function’, one that includes structurally complicated generalizations of utility functions.

As a central example, the utility indifference proposal has the agent switching between utility functions \(U_X\) and \(U_Y\) depending on whether a switch is pressed. We can call this meta-system a ‘preference framework’ to avoid presuming in advance that it embodies a VNM-coherent utility function.
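To make the example concrete, here is one way to write that meta-system down as a single formula (the notation here is an illustrative assumption, not fixed by the utility indifference proposal itself). For an outcome \(o\):

\[ U(o) \;=\; \begin{cases} U_X(o) & \text{if the switch is not pressed} \\ U_Y(o) & \text{if the switch is pressed.} \end{cases} \]

Whether an agent maximizing this composite \(U\) in fact behaves like a VNM-coherent maximizer of some single utility function is exactly the question the neutral term ‘preference framework’ leaves open.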

An even more general term would be ‘decision algorithm’, which doesn’t presume that the agent operates by preferring outcomes.

Children:

  • Moral uncertainty

A meta-utility function in which the utility function, as usually considered, takes on different values in different possible worlds, potentially distinguishable by evidence (a formal sketch follows this list).

  • Meta-utility function

Preference frameworks built out of simple utility functions, but where, e.g., the ‘correct’ utility function for a possible world depends on whether a button is pressed.

  • Attainable optimum

The ‘attainable optimum’ of an agent’s preferences is the best that agent can actually do given its finite intelligence and resources (as opposed to the global maximum of those preferences).
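As a sketch of the moral uncertainty case above (with assumed notation, for illustration only): the meta-utility function can be written as an expectation over candidate utility functions \(U_w\), one for each possible world \(w\), weighted by the probability of that world given the evidence \(e\) observed so far:

\[ U(o \mid e) \;=\; \sum_w P(w \mid e)\, U_w(o). \]

The indexing by worlds and the conditioning on evidence are assumptions of this sketch; the child pages give the fuller constructions.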
