A “meta-utility function” is a preference framework built by composing multiple simple utility functions into a more complicated structure. The point might be, e.g., to describe an agent that optimizes different utility functions depending on whether a switch is pressed, or an agent that learns a ‘correct’ utility function by observing data informative about some ideal target. For central examples see Utility indifference and Moral uncertainty.
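The switch-pressed case can be sketched as follows. This is a minimal toy illustration, not a formalism from the literature: the outcome representation, the two component utilities, and all names here are illustrative assumptions.

```python
# Toy sketch of a switch-dependent meta-utility function: the agent
# optimizes one utility function while the switch is unpressed and a
# different one once it is pressed. All names are illustrative.
from typing import Callable, Dict

Outcome = Dict[str, float]  # toy outcome: named features with values


def utility_normal(outcome: Outcome) -> float:
    # Hypothetical "task" utility: value of widgets produced.
    return outcome.get("widgets", 0.0)


def utility_shutdown(outcome: Outcome) -> float:
    # Hypothetical shutdown utility: rewards having safely powered down.
    return 1.0 if outcome.get("shut_down", 0.0) >= 1.0 else 0.0


def make_meta_utility(
    u_off: Callable[[Outcome], float],
    u_on: Callable[[Outcome], float],
) -> Callable[[Outcome, bool], float]:
    # Compose two simple utility functions into one meta-utility whose
    # value depends on whether the switch has been pressed.
    def meta(outcome: Outcome, switch_pressed: bool) -> float:
        return u_on(outcome) if switch_pressed else u_off(outcome)
    return meta


meta_u = make_meta_utility(utility_normal, utility_shutdown)
print(meta_u({"widgets": 3.0}, switch_pressed=False))  # 3.0
print(meta_u({"shut_down": 1.0}, switch_pressed=True))  # 1.0
```

Note that the composed object is a single function of (outcome, switch state), which is what makes it a meta-utility rather than two unrelated utilities; the interesting design questions (e.g., whether the agent is indifferent to the switch being pressed) live in how the components are weighted.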
- Preference framework
What does an agent use to compare possible outcomes?