Immediate goods

One of the potential views on ‘value’ in the value alignment problem is that what we should want from an AI is a list of immediate goods or outcome features like ‘a cure for cancer’ or ‘letting humans make their own decisions’ or ‘preventing the world from being wiped out by a paperclip maximizer’. (Immediate Goods as a criterion of ‘value’ isn’t the same as saying we should give the AI those explicit goals; calling such a list ‘value’ means it’s the real criterion by which we should judge how well the AI did.)

Arguments

Immaturity of view deduced from presence of instrumental goods

It seems understandable that Immediate Goods would be a very common form of expressed want when people first consider the value alignment problem; they would look for valuable things an AI could do.

But such a quickly produced list of expressed wants will often include instrumental goods rather than terminal goods. For example, a cancer cure is (presumably) a means to the end of healthier or happier humans, which would then be the actual grounds on which the AI’s real-world ‘value’ was evaluated from the human speaker’s standpoint. If the AI ‘cured cancer’ in some technical sense that didn’t make people healthier, the original person making the wish would probably not see the AI as having achieved value.

This is a reason to doubt the maturity of such expressed views, and to suspect that the stated list of immediate goods will probably evolve into a more terminal view of value from a human standpoint, given further reflection.

Mootness of immaturity

Irrespective of the above, so far as technical issues like Edge Instantiation are concerned, the ‘value’ variable can still apply to someone’s spontaneously produced list of immediate wants, and all the standard consequences of the value alignment problem usually still apply. This means we can immediately say (honestly) that, e.g., Edge Instantiation would be a problem for whatever want the speaker just expressed, without needing to persuade them to some other stance on ‘value’ first. Since the same technical problems apply both to the immature view and to the expected mature view, we don’t need to dispute the view of ‘value’ in order to take it at face value and honestly explain the standard technical issues that would still apply.

Moral imposition of short horizons

Arguably, a list of immediate goods may make some sense as a stopping-place for evaluating the performance of the AI, if either of the following conditions obtains:

  • There is much more agreement (among project sponsors or humans generally) about the goodness of the instrumental goods than there is about the terminal values that make them good. E.g., twenty project sponsors can all agree that freedom is good, but have nonoverlapping concepts of why it is good, and (hypothetically) these people would continue to disagree even in the limit of indefinite debate or reflection. Then if we want to collectivize ‘value’ from the standpoint of the project sponsors for purposes of talking about whether the AI methodology achieves ‘value’, maybe it would just make sense to talk about how much (intuitively evaluated) freedom the AI creates.

  • It is in some sense morally incumbent upon humanity to do its own thinking about long-term outcomes and achieve them through immediate goods, or to arrive at long-term outcomes via its own decisions or optimization starting from immediate goods. In this case, it might make sense to see the ‘value’ of the AI as being realized only in the AI getting to those immediate goods, because it would be morally wrong for the AI to optimize consequences beyond that point.

To the knowledge of Eliezer Yudkowsky as of May 2015, neither of these views has yet been advocated by anyone in particular as a defense of an immediate-goods theory of value.

Parents:

  • Value

    The word ‘value’ in the phrase ‘value alignment’ is a metasyntactic variable that indicates the speaker’s future goals for intelligent life.