Humean degree of freedom

A “Humean degree of freedom” appears in a cognitive system whenever some quantity, label, or concept depends on the choice of utility function (or, more generally, on the agent’s preferences). For example, the notion of “important impact on the world” depends on which variables, when they change, affect something the system cares about. So if you tell an AI “Tell me about any important impacts of this action,” you are asking it to perform a calculation that depends on your preferences, which may have high complexity and be difficult to identify to the AI.
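
To make the dependence concrete, here is a minimal Python sketch (all names and numbers are hypothetical, not from the original text) showing how an “important impact” classifier is parameterized by a utility function, so the same physical change is labeled differently under different preferences:

```python
# Illustrative sketch: whether a change counts as an "important impact"
# depends on which utility function is supplied, not on the change alone.

from typing import Callable, Dict

State = Dict[str, float]            # hypothetical world-state: variable -> value
UtilityFn = Callable[[State], float]

def important_impacts(before: State, after: State,
                      utility: UtilityFn, threshold: float = 1.0):
    """Return changed variables whose effect on `utility` exceeds `threshold`.

    The answer is a Humean degree of freedom: it varies with the utility
    function chosen, even though the physical change is the same.
    """
    impacts = []
    for var in before:
        if before[var] != after.get(var, before[var]):
            # Measure how much this single change matters to this utility.
            changed = dict(before, **{var: after[var]})
            delta = abs(utility(changed) - utility(before))
            if delta >= threshold:
                impacts.append((var, delta))
    return impacts

# Two agents observe the same action but disagree about what was "important".
before = {"paperclips": 0.0, "human_welfare": 10.0}
after = {"paperclips": 100.0, "human_welfare": 9.0}

paperclip_maximizer = lambda s: s["paperclips"]
human_centric = lambda s: s["human_welfare"]

print(important_impacts(before, after, paperclip_maximizer))  # [('paperclips', 100.0)]
print(important_impacts(before, after, human_centric))        # [('human_welfare', 1.0)]
```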

Parents:

  • Reflectively consistent degree of freedom

    When an instrumentally efficient, self-modifying AI can be like X or like X’ in such a way that X wants to be X and X’ wants to be X’, that’s a reflectively consistent degree of freedom.