# Expected utility agent

An expected utility agent has some way of consistently scoring all the possible outcomes of its actions, like assigning 20 points to saving a burning orphanage. The agent weighs its actions by estimating the probability-weighted average utility of each action’s consequences. For example, an action with a 50% chance of leading to an outcome with utility 20, a 25% chance of leading to an outcome with utility 35, and a 25% chance of leading to an outcome with utility 45, would have an expected utility of 30. These utilities can potentially reflect any sort of morality or values—selfishness, altruism, or paperclips. Several famous mathematical theorems suggest that if you can’t be viewed as some type of expected utility agent, you must be going in circles, making bad bets, or exhibiting other detrimental behaviors. Several famous experiments show that human beings do exhibit those behaviors, and so can’t be viewed as expected utility agents.
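The arithmetic in the example above can be sketched as a short computation; the function name and the representation of an action as (probability, utility) pairs are illustrative, not from the text:

```python
def expected_utility(outcomes):
    """Probability-weighted average utility over an action's possible outcomes.

    `outcomes` is a list of (probability, utility) pairs; the probabilities
    should sum to 1.
    """
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)

# The example from the text: 50% chance of utility 20,
# 25% chance of utility 35, 25% chance of utility 45.
action = [(0.50, 20), (0.25, 35), (0.25, 45)]
print(expected_utility(action))  # 30.0
```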

(Alexei: Is this line necessary if we have the summary paragraph visible?) An expected utility agent is an agent whose decision rule treats two actions equivalently whenever they have the same expected utility.
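The decision rule in this definition can be sketched as a direct comparison of expected utilities; all names here are illustrative, and actions are again represented as (probability, utility) pairs:

```python
def expected_utility(outcomes):
    """Probability-weighted average utility of an action's outcomes."""
    return sum(p * u for p, u in outcomes)

def rank(action_a, action_b):
    """An expected utility agent's decision rule: prefer the action with
    higher expected utility, and be indifferent when they are equal."""
    ua, ub = expected_utility(action_a), expected_utility(action_b)
    if ua > ub:
        return "prefer_a"
    if ua < ub:
        return "prefer_b"
    return "indifferent"

# A sure outcome of utility 30 and the gamble from the opening paragraph
# have the same expected utility, so this agent treats them equivalently.
print(rank([(1.0, 30)], [(0.50, 20), (0.25, 35), (0.25, 45)]))  # indifferent
```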

*Todo: write a longer explanation of expected utility, the consequences of the assumption, and an introduction.*


• It’s easy to equivocate between “can be viewed as” and “is.” Indeed, any rational agent “can be viewed as” an expected utility maximizer, but it need not have any internal architecture resembling such a maximizer. In particular, the utility function being maximized need not be represented explicitly anywhere in the agent.

Most of the actual oomph from decreeing something an expected utility maximizer seems to come from these additional assumptions, which aren’t delivered by the relevant theorems. All the theorems give you is a characterization of the agent’s attitude towards uncertainty (and so e.g. they have no content when there is no uncertainty).

(I expect the author doesn’t often make this mistake, but it is pretty common in the broader LessWrong crowd.)