Utility function

A utility function describes the relative degree to which an agent prefers or disprefers certain outcomes, by assigning each outcome an abstract score, its utility.

For example, let’s say that an agent’s utility function:

  • Assigns utility 5 to eating vanilla ice cream.

  • Assigns utility 8 to eating chocolate ice cream.

  • Assigns utility 0 to eating no ice cream at all.

This tells us that if we offer the agent choices like:

  • Choice A: 50% probability of no ice cream, 50% probability of chocolate ice cream

  • Choice B: 100% probability of vanilla ice cream.

  • Choice C: 30% probability of no ice cream, 70% probability of chocolate ice cream

…then the agent will prefer B to A and C to B, since the respective expected utilities are:

$$\begin{aligned} \mathbb{E}[U(A)] \ &= \ 0.5 \cdot €0 + 0.5 \cdot €8 \ = \ €4 \\ \mathbb{E}[U(B)] \ &= \ 1.0 \cdot €5 \ = \ €5 \\ \mathbb{E}[U(C)] \ &= \ 0.3 \cdot €0 + 0.7 \cdot €8 \ = \ €5.6 \end{aligned}$$
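
Here is a minimal sketch of that computation in Python (the outcome names, the dictionary representation, and the lottery encoding are illustrative choices, not part of the original setup):

```python
# A utility function as a map from outcomes to utilities (in '€'),
# and each lottery as a list of (probability, outcome) pairs.
utility = {"none": 0, "vanilla": 5, "chocolate": 8}

lotteries = {
    "A": [(0.5, "none"), (0.5, "chocolate")],
    "B": [(1.0, "vanilla")],
    "C": [(0.3, "none"), (0.7, "chocolate")],
}

def expected_utility(lottery, u):
    """Sum of probability times utility over the lottery's outcomes."""
    return sum(p * u[outcome] for p, outcome in lottery)

for name, lottery in lotteries.items():
    print(name, expected_utility(lottery, utility))
# A 4.0
# B 5.0
# C 5.6
```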

Observe that we could multiply all the utilities above by 2, or 12, or add 5 to all of them, without changing the agent’s behavior. What the above utility function really says is:

“The interval from vanilla ice cream to chocolate ice cream is 60% of the size of the interval from no ice cream to vanilla ice cream, and the sign of both intervals is positive.”

These relative intervals don’t change under positive affine transformations (adding a real number or multiplying by a positive real number), so utility functions are equivalent up to a positive affine transformation.
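
To make that invariance concrete, here is a small sketch (same toy representation as above) checking that the preference ordering survives the positive affine transformation 2u + 5:

```python
utility = {"none": 0, "vanilla": 5, "chocolate": 8}
lotteries = {
    "A": [(0.5, "none"), (0.5, "chocolate")],
    "B": [(1.0, "vanilla")],
    "C": [(0.3, "none"), (0.7, "chocolate")],
}

def expected_utility(lottery, u):
    return sum(p * u[outcome] for p, outcome in lottery)

def preference_order(u):
    """Lotteries sorted from least to most preferred under u."""
    return sorted(lotteries, key=lambda name: expected_utility(lotteries[name], u))

# A positive affine transformation: multiply every utility by 2, then add 5.
transformed = {outcome: 2 * score + 5 for outcome, score in utility.items()}

assert preference_order(utility) == preference_order(transformed) == ["A", "B", "C"]
```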

Confusions to avoid

The agent is not pursuing chocolate ice cream in order to get some separate desideratum called ‘utility’. Rather, this notion of ‘utility’ is an abstract measure of how strongly the agent pursues chocolate ice cream, relative to other things it pursues.

Contemplating how a utility function represents the same preferences after being multiplied by 2 helps to emphasize:

  • Utility isn’t a solid entity; there’s no invariant way of saying “how much utility” an agent scored over the course of its life. (We could just as easily say it scored twice as much utility.)

  • Utility measures an agent’s relative preferences; it’s not something an agent wants instead of other things. We could just as easily describe each thing’s value relative to eating a scoop of chocolate ice cream, without introducing any separate unit of ‘utility’.

  • An agent doesn’t need to mentally represent a ‘utility function’ in order for its behavior to be consistent with that utility function. In the case above, the agent could actually value chocolate ice cream at €8.1 and it would express the same visible preferences A < B < C (see the sketch below). That is, its behavior could be viewed as consistent with either of those two utility functions, and perhaps the agent doesn’t explicitly represent any utility function at all.
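
As a quick illustrative check of that last point (nothing here is something the agent itself needs to compute), chocolate at €8 and chocolate at €8.1 produce the same observable ranking:

```python
def expected_utility(lottery, u):
    return sum(p * u[outcome] for p, outcome in lottery)

lotteries = {
    "A": [(0.5, "none"), (0.5, "chocolate")],
    "B": [(1.0, "vanilla")],
    "C": [(0.3, "none"), (0.7, "chocolate")],
}

for chocolate in (8.0, 8.1):
    u = {"none": 0, "vanilla": 5, "chocolate": chocolate}
    order = sorted(lotteries, key=lambda name: expected_utility(lotteries[name], u))
    print(chocolate, order)
# 8.0 ['A', 'B', 'C']
# 8.1 ['A', 'B', 'C']
```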

Some other potential confusions to avoid:

• Saying that an agent behaves consistently with some utility function(s) does not, by itself, tell us anything about what the agent wants. There’s no sense in which the theory of expected utility mandates that chocolate ice cream must have more utility than vanilla ice cream.

• The expected utility formalism is hence something entirely different from utilitarianism, a separate moral philosophy with a confusingly neighboring name.

• Expected utility doesn’t say anything about needing to value each additional unit of ice cream, or each additional dollar, by the same amount. We can easily have scenarios like:

  • Eat 1 unit of vanilla ice cream: €5.

  • Eat 2 units of vanilla ice cream: €7.

  • Eat 3 units of vanilla ice cream: €7.5.

  • Eat 4 units of vanilla ice cream: €3 (because stomachache).

That is: a consistent utility function must be consistent in how it values complete final outcomes, not in how it values each marginal added unit of ice cream.
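
A brief sketch of the same point, assuming (as earlier) that eating no ice cream has utility €0: the utility function scores complete final outcomes, and the implied marginal values need not be constant or even positive.

```python
# Utility of the complete final outcome "ate n units of vanilla ice cream".
outcome_utility = {0: 0, 1: 5, 2: 7, 3: 7.5, 4: 3}

# Marginal utility of each added unit: diminishing, then negative.
marginal = [outcome_utility[n] - outcome_utility[n - 1] for n in range(1, 5)]
print(marginal)  # [5, 2, 0.5, -4.5]

# The most preferred complete outcome is 3 units, not "as much as possible".
print(max(outcome_utility, key=outcome_utility.get))  # 3
```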

Similarly, there is no rule that a gain of $200,000 has to be assigned twice the utility of a gain of $100,000, and indeed this is generally not the case in real life. People have diminishing returns on money; the richer you already are, the less each additional dollar is worth.

This in turn implies that the gamble with the highest expected money is not always the gamble with the highest expected utility.

For example: most people would prefer (A) a certainty of $1,000,000 to (B) a 50% chance of $2,000,010 and a 50% chance of nothing, since the second $1,000,010 will have substantially less further value to them than the first $1,000,000. The utilities of $0, $1,000,000, and $2,000,010 might be something like €0, €1, and €1.2.

Thus gamble A has higher expected utility than gamble B, even though gamble B leads to a higher expectation of gain in dollars (by a margin of $5). There’s no useful concept corresponding to “the utility of the expectation of the gain”; what we want is “the expectation of the utility of the gain”.
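
A small sketch of that comparison, plugging in the illustrative utilities from above:

```python
# Gambles as (probability, dollar gain) pairs; utilities of gains as assumed above.
utility_of_gain = {0: 0.0, 1_000_000: 1.0, 2_000_010: 1.2}

gamble_a = [(1.0, 1_000_000)]
gamble_b = [(0.5, 0), (0.5, 2_000_010)]

def expectation(gamble, value=lambda gain: gain):
    """Expected value of a gamble under an arbitrary valuation of gains."""
    return sum(p * value(gain) for p, gain in gamble)

# Expected dollars: B wins by $5.
print(expectation(gamble_a), expectation(gamble_b))  # 1000000.0 1000005.0

# Expected utility: A wins comfortably.
print(expectation(gamble_a, utility_of_gain.get),
      expectation(gamble_b, utility_of_gain.get))    # 1.0 0.6
```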

• Conversely, when we talk about utilities, we are talking about the very unit in which returns are measured. By the definition of utility, a gain that you assign +€10 (relative to some baseline alternative) is something you want twice as much as a gain you assign +€5. It doesn’t make any sense to imagine diminishing returns on utility, as if utility were a separate good rather than the unit in which returns are measured.

If you claim to assign gain X an expected utility of +€1,000,000, then you must want it a million times as much as some gain Y that you assign an expected utility of +€1. You are claiming that you’d trade a certainty of Y for a 1 in 999,999 chance at gaining X. If that’s not true, then either you aren’t a consistent expected utility agent (admittedly likely) or you don’t really value X a million times as much as Y (also likely). If ordinary gains are in the range of €1, then the notion of a gain of +€1,000,000 is far more startling than talking about a mere gain of a million dollars.
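
Spelling out the arithmetic of that trade: a 1 in 999,999 chance at X has expected utility

$$\frac{1}{999{,}999} \cdot €1{,}000{,}000 \ \approx \ €1.000001 \ > \ €1,$$

so it is worth, just barely, more than a certainty of Y.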

Motivations for utility

Various coherence theorems show that if your behavior can’t be viewed as coherent with some consistent utility function over outcomes, you must be using a dominated strategy. Conversely, if you’re not using a dominated strategy, we can interpret you as acting as if you had a consistent utility function. See this tutorial.
