Cosmopolitan value

‘Cosmopolitan’, lit. “of the city of the cosmos”, intuitively implies a very broad, embracing standpoint that is tolerant of other people (entities) and ways that may at first seem strange to us; trying to step out of our small, parochial, local standpoint and adopt a broader one.

From the perspective of volitional metaethics, this would normatively cover a case where what we humans currently value doesn’t cover as much as what we would predictably come to value in the limit of better knowledge, greater comprehension, longer thinking, higher intelligence, or better understanding our own natures and changing ourselves in directions we thought were right. An alien civilization might at first seem completely bizarre to us, and hence scarce in events that we intuitively understood how to value; but if we really understood what was going on, and tried to take additional steps toward widening our circle of concern, we’d see it was a galaxy no less to be valued than our own.

From outside the perspective of any particular metaethics, the notion of ‘cosmopolitan’ may be viewed as more like a historical generalization about moral progress: many times in human history, we get a first look at people different from us, find their ways repugnant or just confusing, and then later on we bring these people into our circle of concern and learn that they had their own nice things even if we didn’t understand those nice things. Afterwards, in these cases, we look back and say ‘moral progress has occurred’. Anyone pointing at people and claiming they are not to be valued as our fellow sapients, or asserting that their ways are objectively inferior to our own, is refusing to learn this lesson of history and failing to appreciate what we would see if we could really adopt their perspective. To be ‘cosmopolitan’ is to learn from this generalization, and accept in advance that other beings may have valuable lives and ways even if we don’t find them immediately easy to understand.

People who’ve adopted this viewpoint often start out with a strong prior that anyone talking about not just letting AIs do their own thing, figure out their own path, and create whatever kind of intergalactic civilization they want, must have failed to learn the cosmopolitan lesson. To which at least some AI alignment theorists reply: “No! You don’t understand! You’re completely failing to pass our Ideological Turing Test! We are cosmopolitans! We also grew up reading science fiction about aliens that turned out to have their own perspectives, and AIs willing to extend a hand in friendship but being mistreated by carbon chauvinists! We’d be fine with a weird and wonderful intergalactic civilization full of non-organic beings appreciating their own daily life in ways we wouldn’t understand. But paperclip maximizers don’t do that! We predict that if you got to see the use a paperclip maximizer would make of the cosmic endowment, if you really understood what was going on inside that universe, you’d be as horrified as we are. You and I have a difference of empirical predictions about the consequences of running a paperclip maximizer, not a values difference about how far to widen the circle of concern.”

“Fragility of Cosmopolitan Value” could denote the form of the Fragility of Value / Complexity of Value thesis that is relevant to intuitive cosmopolitans: Agents with random utility functions wouldn’t use the cosmic endowment in ways that achieve even a tiny fraction of the achievable value, even in the limit of our understanding exactly what was going on and trying to take a very embracing perspective.


  • Value

    The word ‘value’ in the phrase ‘value alignment’ is a metasyntactic variable that indicates the speaker’s future goals for intelligent life.