Strategic AGI typology

A list of advanced agent types, in classes broad enough to correspond to different strategic scenarios: AIs that can do different things, that can only be built under different circumstances, or that are only desirable given particular background assumptions. This typology isn't meant to be exhaustive.

Children:

  • Known-algorithm non-self-improving agent

    Possible advanced AIs that aren't self-modifying, aren't self-improving, and where we know and understand all the component algorithms.

  • Autonomous AGI

    The hardest possible class of Friendly AI to build, with the least moral hazard; an AI intended to neither require nor accept further direction.

  • Task-directed AGI

    An advanced AI that's meant to pursue a series of limited-scope goals given it by the user. In Bostrom's terminology, a Genie.

  • Oracle

    Sys­tem de­signed to safely an­swer ques­tions.

Parents:

  • AI alignment

    The great civilizational problem of creating artificially intelligent computer systems such that running them is a good idea.