Some computations are people

This proposition is true if at least some possible computations (not necessarily any that could run on modern computers) have consciousness, sapience, or whatever other properties are necessary to make them people and therefore objects of ethical value.

Key argument: Most domain experts think that human beings are themselves (a) Turing-computable and (b) conscious in virtue of the computations that they perform. In other words, you yourself are a conscious algorithm. If you consider yourself a person, then you consider at least one computer program (yourself) to be a person.

This is why some domain experts can be very confident of the proposition, despite the open moral subquestions about which properties are necessary for personhood. If you take for granted the physical version of the Church-Turing thesis, which holds that everything in the physical universe is computable (and therefore so are human beings), then of course some computer programs (like you) can be people, or have any other properties we associate with human beings.

This proposition falls into the class of issues that some people think are incredibly deep and fraught philosophical questions, and that other people think are incredibly deep philosophical questions that happen to have clear, known answers.


  • AI alignment

    The great civilizational problem of creating artificially intelligent computer systems such that running them is a good idea.