Task identification problem

A subproblem of building a task-directed AGI (genie) is communicating the next task to the AGI and identifying which outcomes count as fulfilling that task. For the superproblem, see safe plan identification and verification. This seems primarily like a communication problem. It might have additional constraints associated with, e.g., the AGI being a behaviorist genie. In the known-fixed-algorithm case of AGI, we may not have much freedom in aligning the AGI's planning capabilities with its task representation, and may hence need to work with a particular task representation (i.e., we can't just use language to communicate the task; we need to use labeled training cases). This is currently a stub page, and is mainly being used as a parent or tag for subproblems.
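As a toy sketch of the labeled-training-cases regime (everything here, from the feature vectors to the nearest-centroid rule, is a hypothetical stand-in for whatever representation a fixed algorithm would actually expose), the task could be identified by fitting a simple classifier to user-labeled example outcomes and then using it to score candidate outcomes:

```python
# Toy sketch only: a task communicated as labeled training cases rather
# than language. Features and the nearest-centroid rule are placeholders.

def learn_task_concept(labeled_outcomes):
    """Fit a trivial nearest-centroid model to (features, label) pairs."""
    positives = [f for f, label in labeled_outcomes if label]
    negatives = [f for f, label in labeled_outcomes if not label]

    def centroid(vecs):
        return tuple(sum(xs) / len(xs) for xs in zip(*vecs))

    pos_c, neg_c = centroid(positives), centroid(negatives)

    def fulfills_task(outcome):
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        # An outcome fulfills the task if it sits nearer the positive
        # examples than the negative ones.
        return dist(outcome, pos_c) < dist(outcome, neg_c)

    return fulfills_task

# The user identifies the task by labeling example outcomes.
training_cases = [
    ((1.0, 0.9), True),   # outcome that fulfills the task
    ((0.9, 1.0), True),
    ((0.1, 0.2), False),  # outcome that does not
    ((0.2, 0.0), False),
]
fulfills = learn_task_concept(training_cases)
print(fulfills((0.8, 0.8)))  # True: close to the positive examples
print(fulfills((0.1, 0.1)))  # False
```

The point is only that the communication channel is labeled cases rather than language; any concept-learning method could stand in for the classifier here.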

Children:

  • Look where I'm pointing, not at my finger

    When trying to communicate the concept “glove”, getting the AGI to focus on “gloves” rather than “my user’s decision to label something a glove” or “anything that depresses the glove-labeling button” (illustrated in the sketch below).
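A minimal sketch of the ambiguity (the record fields and the two hypotheses below are hypothetical, not a proposed formalization): several concepts can fit the same labeled cases perfectly while generalizing very differently, because during training the label button is pressed exactly when the object is a glove:

```python
# Toy sketch: two hypotheses that agree on the labeled training data but
# diverge off-distribution. The fields below are hypothetical stand-ins.

training = [
    {"is_glove": True,  "button_pressed": True},
    {"is_glove": True,  "button_pressed": True},
    {"is_glove": False, "button_pressed": False},
]

def hypothesis_glove(case):    # "look where I'm pointing"
    return case["is_glove"]

def hypothesis_button(case):   # "look at my finger"
    return case["button_pressed"]

# Both hypotheses fit the labeled training cases perfectly...
assert all(hypothesis_glove(c) == hypothesis_button(c) for c in training)

# ...but they diverge on a new case where the button gets pressed for a
# non-glove (say, because the AGI can influence the button itself). A
# learner that internalized the button hypothesis "fulfills the task" by
# causing button presses, not by producing gloves.
new_case = {"is_glove": False, "button_pressed": True}
print(hypothesis_glove(new_case))   # False
print(hypothesis_button(new_case))  # True
```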

Parents:

  • Task-directed AGI

    An advanced AI that’s meant to pursue a series of limited-scope goals given it by the user. In Bostrom’s terminology, a Genie.