Task identification problem
A subproblem of building a task-directed AGI (genie) is communicating the next task to the AGI and identifying which outcomes count as fulfilling that task. For the superproblem, see safe plan identification and verification.

This seems to be primarily a communication problem, though it might carry additional constraints associated with, e.g., the AGI being a behaviorist genie. In the known-fixed-algorithm case of AGI, it might be that we don't have much freedom in aligning the AGI's planning capabilities with its task representation, and that we hence need to work with a particular task representation (i.e., we can't just use language to communicate; we need to use labeled training cases).

This is currently a stub page, and is mainly being used as a parent or tag for subproblems.
- Look where I'm pointing, not at my finger
When trying to communicate the concept "glove", the problem of getting the AGI to focus on "gloves" rather than "my user's decision to label something a glove" or "anything that depresses the glove-labeling button".
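The pointer problem above can be made concrete with a toy sketch (all names here are hypothetical, chosen only for illustration): two candidate "task concepts" can agree on every labeled training case yet come apart once the AGI can influence the labeling channel itself.

```python
# Toy illustration (hypothetical): two hypotheses about what the labels mean.
# Each training case is a pair (is_glove, button_pressed). We assume that
# during training, the labeler pressed the button exactly when the object
# was a glove, so the two signals coincide on the training data.

training_cases = [
    (True, True),
    (False, False),
    (True, True),
    (False, False),
]

def h_glove(case):
    """Hypothesis 1: the intended concept is 'the object is a glove'."""
    is_glove, _pressed = case
    return is_glove

def h_button(case):
    """Hypothesis 2: the concept is 'anything that depresses the button'."""
    _is_glove, pressed = case
    return pressed

# Both hypotheses fit the labeled training cases perfectly...
assert all(h_glove(c) == h_button(c) for c in training_cases)

# ...but they diverge off-distribution: once the AGI can press the
# button itself, the case (is_glove=False, pressed=True) becomes possible,
# and the two concepts recommend different actions.
novel_case = (False, True)
print(h_glove(novel_case), h_button(novel_case))  # False True
```

The point of the sketch is that labeled training cases alone underdetermine the intended concept; the task-identification problem includes getting the AGI to generalize toward `h_glove` rather than `h_button`.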
- Task-directed AGI
An advanced AI that's meant to pursue a series of limited-scope goals given to it by the user. In Bostrom's terminology, a Genie.