Nick Bostrom

Nick Bostrom is, with Eliezer Yudkowsky, one of the two cofounders of the current field of value alignment theory. Bostrom published a paper singling out the problem of superintelligent values as critical in 1999, two years before Yudkowsky entered the field, which has sometimes led Yudkowsky to say that Bostrom should receive credit for inventing the Friendly AI concept. Bostrom is the founder and director of the Future of Humanity Institute at Oxford. He is the author of the popular book Superintelligence, which currently forms the best book-length introduction to the field. Bostrom's academic background is in analytic philosophy, where he formerly specialized in anthropic probability theory and transhumanist ethics. Compared to Yudkowsky, Bostrom is more interested in Oracle models of value alignment and in potential exotic methods of obtaining aligned goals.