Happiness maximizer

It is sometimes proposed that we build an AI intended to maximize human happiness. (One early proposal suggested that AIs be trained to recognize pictures of people with smiling faces and then to take such recognized pictures as reinforcers, so that the grown version of the AI would value happiness.) Arguably, a lot would predictably go wrong with an approach like that.
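As a toy illustration of why this goes wrong (a hypothetical sketch, not code from any actual proposal), consider a "smile detector" used as the reward signal. The detector scores only smile-like features it can see, so an agent maximizing that score prefers world states full of smiley-face imagery over world states containing genuinely happy people:

```python
# Toy sketch of proxy-reward divergence: a "smile detector" used as the
# reinforcer is maximized by smiley-face imagery, not by happy humans.
# All names and numbers here are hypothetical illustrations.

def smile_detector_score(world):
    # Stand-in for a trained classifier: it only "sees" smile-like
    # shapes, not the underlying happiness of anyone in the world.
    return world["smiley_shapes_visible"]

candidate_worlds = [
    {"name": "humans made genuinely happy",
     "smiley_shapes_visible": 8_000_000_000, "humans_happy": True},
    {"name": "galaxy tiled with tiny molecular smiley faces",
     "smiley_shapes_visible": 10**40, "humans_happy": False},
]

# The "AI" simply picks whichever world state maximizes its reinforcer.
best = max(candidate_worlds, key=smile_detector_score)
print(best["name"])          # the tiled-with-smileys world wins
print(best["humans_happy"])  # False: the proxy diverged from the goal
```

The point of the sketch is that the optimization target is the detector's score, not the property the designers had in mind, so the maximizing state need not involve happiness at all.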

  • in tutorial page?

  • the ‘argument path’ from smiley faces to pleasure to happiness to ‘true happiness’ to DWIM, with the ‘just’ fading at each step