Coordinative AI development hypothetical

A simplified, easier hypothetical form of the known-algorithm non-recursive path within the Value achievement dilemma. Suppose there were an effective world government with effective monitoring of all computers, or that for whatever other imaginary reason rogue AI development projects were simply not a problem. What would the ideal research trajectory for that world look like?

Thinking through this hypothetical can serve several purposes:
  • Highlight/flag where safety shortcuts are being taken because we live in the non-ideal case.

  • Let us think through what a maximally safe development pathway would look like, and why, without stopping every 30 seconds to think about how we won't have time. This may uncover valuable research paths that could, on a second glance, be done more quickly.

  • Think through a simpler case of a research-program-generator that has fewer desiderata and hence fewer cognitive distractions.

Referenced terms:
  • AI alignment

    The great civilizational problem of creating artificially intelligent computer systems such that running them is a good idea.

  • Value achievement dilemma

    How can Earth-originating intelligent life achieve most of its potential value, whether by AI or otherwise?