Toxoplasmosis dilemma
The parasite Toxoplasma gondii can be transmitted between cats and rats. Toxoplasma gondii passes from cat feces to rats, and rats infected with Toxoplasma gondii become less averse to cat odors, increasing the probability that they will be eaten by cats (which in turn transmits the parasite back to cats).
Toxoplasma gondii can also be transmitted to humans, causing toxoplasmosis, the non-acute (latent) form of which has been implicated in other problems such as schizophrenia.
For a while, starting in 2011, it was thought that infection with Toxoplasma gondii might also cause human beings to like cats more (the newspapers talked about “crazy cat lady syndrome”). More recently, these studies appear to have failed to replicate. But the premise is sufficiently lovely as a reformulation of an old decision-theory problem that we’re going to pretend otherwise.
Suppose the human toxoplasmosis studies had replicated. In particular, imagine that in the following experiment…
- Subjects are given $10 to participate in the experiment.
- Subjects are told about the general existence of toxoplasmosis.
- Subjects are presented with a cute kitten that is guaranteed not to carry toxoplasmosis (it’s been tested).
- Subjects are offered a chance to pet the cute kitten for five minutes, or to end the experiment there and go home.
…subjects who chose to pet the kitten were found to be 20% likely to have latent toxoplasmosis, while those who refrained from petting the kitten were 10% likely to have latent toxoplasmosis.
You are now part of a similar experiment. If you pet the kitten, an observer would conclude that your absolute risk of latent toxoplasmosis is 10 percentage points higher (20% rather than 10%), and the health risks of latent toxoplasmosis greatly outweigh the hedonic gains of petting a kitten for five minutes. (We could postulate that petting the kitten gains you 1,000 hedonic units, while having latent toxoplasmosis costs you 10,000,000 hedonic units; the relative quantities are chosen to resemble those in Newcomb’s Problem.) On the other hand, petting the kitten cannot cause you to have toxoplasmosis: either you already have it or you don’t.
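In other words, as an illustrative back-of-the-envelope figure using the numbers above: naive conditioning on your own action puts \((0.20 - 0.10) \times 10{,}000{,}000 = 1{,}000{,}000\) hedonic units of apparent evidential stake on the choice, against a direct gain of only 1,000 hedonic units for petting. Whether that evidential stake should move your decision is exactly what the responses below dispute.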
Do you pet the kitten?
[Causal diagram to be inserted here.]
This dilemma is a revised form of the structurally similar Solomon’s Problem and Smoking Lesion problems traditionally considered in decision theory. (The Toxoplasmosis Dilemma has a more realistic causal structure than Solomon’s Problem, in which your decision to steal another person’s spouse is stipulated to be causally unconnected to the probability of rebellion, or the Smoking Lesion, in which smoking is stipulated to have nothing to do with lung cancer. Since these postulates directly contradict our innate causal models, they may confuse the unfamiliar reader.)
In the form of Solomon’s Problem, and later the Smoking Lesion, this dilemma was historically significant and influential in the invention of causal decision theory and its widespread adoption over the alternative of evidential decision theory.
Responses
Causal decision theory
Pet the cute kitten! This choice can’t cause you to get toxoplasmosis; either you already have it or you don’t. So you might as well get the 1,000 hedons from petting the kitten, if that’s what you feel like; and if that causes you to update your probability afterward that you have toxoplasmosis, don’t bother trying to shoot the messenger.
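As a minimal sketch of this calculation, assuming the hedonic payoffs above and writing p_toxo for your fixed probability of latent toxoplasmosis, which the action cannot change (the particular value of p_toxo below is an arbitrary illustration):

```python
# CDT sketch: the action cannot causally change whether you have toxoplasmosis,
# so the same probability p_toxo applies to both actions and petting is dominant.

PET_HEDONS = 1_000          # enjoyment from petting the kitten
TOXO_HEDONS = -10_000_000   # cost of having latent toxoplasmosis

def cdt_value(action: str, p_toxo: float) -> float:
    """Causal expected utility: p_toxo is fixed, unchanged by the action."""
    return (PET_HEDONS if action == "pet" else 0) + p_toxo * TOXO_HEDONS

p = 0.15  # any prior works; the comparison does not depend on it
print(cdt_value("pet", p) - cdt_value("refrain", p))  # always +1000
```

The difference comes out to +1,000 hedons for petting no matter what p_toxo is, which is why the causal answer is to pet.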
Causal decision theory with ratification
Update your probability of latent toxoplasmosis as soon as you notice the initial impulse to pet the kitten, instead of waiting to observe your final action. After updating, check if it still makes sense to pet the kitten according to the updated model. It will still seem to make sense, meaning the model is stable at that point, so a CDT+ratification agent will go ahead and pet the kitten.
Evidential decision theory
Don’t pet the cute kitten! That would be bad news about your probability of having latent toxoplasmosis!
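A minimal sketch of this naive evidential calculation, assuming the 20% and 10% conditional rates from the original experiment apply directly to you:

```python
# EDT sketch: condition on your own action as evidence about latent toxoplasmosis.

PET_HEDONS = 1_000
TOXO_HEDONS = -10_000_000
P_TOXO_GIVEN = {"pet": 0.20, "refrain": 0.10}  # conditional rates from the experiment

def edt_value(action: str) -> float:
    """Evidential expected utility: the probability of toxoplasmosis depends on the action taken."""
    return (PET_HEDONS if action == "pet" else 0) + P_TOXO_GIVEN[action] * TOXO_HEDONS

print(edt_value("pet"))      # 1000 - 0.20 * 10,000,000 = -1,999,000
print(edt_value("refrain"))  #    0 - 0.10 * 10,000,000 = -1,000,000  (preferred)
```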
Of course, if as an EDT agent you decide not to pet the cute kitten on what introspection reveals to be general EDT grounds, a more sophisticated EDT agent might reason further: among EDT agents faced with this dilemma, there is no correlation between kitten-petting and latent toxoplasmosis, since all EDT agents refrain (as you realize once you consider your own action as news). However, this realization doesn’t create an infinite loop. If you were told as news that you had petted the kitten, you would update toward the toxoplasmosis rate among agents who pet, which is higher than the base rate among all subjects. So although a sophisticated EDT agent who refrains might expect only the base rate of toxoplasmosis, rather than the lower rate observed among all subjects who refrain, that base rate is still lower than the estimated rate upon being told as news that one had petted the kitten.
Evidential decision theory with a tickle defense
Introspectively noticing an impulse to pet the cute kitten is already bad news. Actually petting the kitten isn’t any more bad news on top of that. So you might as well pet the kitten.
Logical decision theory
To first order: an LDT agent pets the kitten, since you couldn’t make yourself not have toxoplasmosis by changing the logical output of your decision algorithm to “don’t pet the kitten”.
On envisioning a counterfactual world where LDT agents don’t pet kittens in scenarios like these, an LDT agent expects that the choice not to pet the kitten would no longer be informative to an outside observer, and that LDT agents in that world would have toxoplasmosis at the base rate for all subjects in the experiment. In that counterfactual world, you know not just “I am an agent that didn’t pet the kitten” but “I am an LDT agent that didn’t pet the kitten”; and since no LDT agent in that world pets the kitten, the decision to refrain should not be considered an informative tickle.
To second order, in worlds where an LDT agent does pet the kitten, it should ask “Am I a typical agent that pets the kitten, or an LDT agent?” If there were a control version of the experiment in which subjects were exposed to a safe kitten and not told about toxoplasmosis, an ideal LDT agent that pets the kitten might expect to have latent toxoplasmosis at a rate similar to a kitten-petting subject from that control experiment, since ideal LDT is not influenced by being told about latent toxoplasmosis when this particular kitten is known to be safe. (Of course, it could also be the case that non-ideal humans espousing LDT as a theoretical ideal are still influenced by being told about toxoplasmosis at the start of the experiment, and that thinking about this psychologically affects how enjoyable a known-safe kitten seems to pet.) A CDT agent with ratification might reason similarly, since it is able to accept its own choices as news or evidence about the choices of other CDT agents, after the fact of the decision or inside a ratification loop.
On a technical level, it’s possible that updating on observing yourself pet the kitten might introduce difficulties into some formal LDT variants. We can imagine toxoplasmosis as a disease that influences the utility function of the agent, raising the amount that it enjoys petting kittens. Observing yourself pet a kitten is informative about having toxoplasmosis because of what this tells you about your own utility function. But the algorithm \(\mathcal Q\) for functional decision theory quotes itself as \(\ulcorner \mathcal Q \urcorner\) within its definition, including its own utility function \(\mathcal U.\) So ideal FDT agents should already know their own utility functions \(\mathcal U\) and should not be able to gain more information about their source code by watching themselves pet kittens.
Perfect self-knowledge is an unrealistic assumption for human agents, but uncertain self-knowledge has yet to be formalized in logical decision theory. It’s possible that introducing a ratification-like mechanism into LDT would imply infinite loops or money-pumpable mixed strategies on other Newcomblike problems, as happens with CDT agents facing Death in Damascus.
Toxoplasmosis versus Newcomb’s Problem
A widespread view in contemporary (2016) decision theory is that Solomon’s Problem (to which the Toxoplasmosis Dilemma is meant to be structurally identical) has the same structure as Newcomb’s Problem. That is, the Toxoplasmosis Dilemma and Newcomb’s Problem are alleged to share one structure: in both cases the EDT agent insanely does what corresponds to good news, and the CDT agent sanely does what corresponds to causally potent actions. This analogous response of EDT versus CDT to both dilemmas is part of the mainstream perception of a dichotomy between EDT and CDT around which most key issues in decision theory revolve.
Since LDT gives different answers on Newcomb’s Problem (take only one box) versus the Toxoplasmosis Dilemma (pet the kitten), LDT evidently considers these problems to have importantly different structures.
One broad argument for this structural difference, which does not rely on LDT per se, is to observe that EDT agents and CDT agents alike would prefer different precommitments on Newcomb’s Problem versus the Toxoplasmosis Dilemma. Both EDT agents and CDT agents would prefer to precommit to one-boxing, although CDT agents demand that they do so before Omega scans them. Conversely, if EDT or CDT agents know in advance that they will face some version of the Toxoplasmosis Dilemma, both would prefer to precommit to petting the kitten, if that is what the average subject would enjoy (and this commitment is no longer bad news from an EDT perspective, since you know you are petting because you precommitted before seeing the kitten). Similarly, a pretheoretical subject would probably also prefer different precommitments in the Toxoplasmosis Dilemma versus Newcomb’s Problem. This argues that the two dilemmas have some important structural difference baked into them, which can be exposed by asking any decision theory about its preferred precommitment, even if EDT and CDT see no relevant difference in the moment of facing the actual dilemmas.
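As a rough sketch of the precommitment argument: the base rate used below is a hypothetical number, and the sketch assumes that petting-because-you-precommitted carries no further news about your infection status, so a precommitted policy of petting is evaluated at the same infection probability as a policy of refraining.

```python
# Precommitment sketch: evaluated before seeing the kitten, a policy of petting
# carries no extra evidence about toxoplasmosis, so both EDT and CDT prefer it.

PET_HEDONS = 1_000
TOXO_HEDONS = -10_000_000
BASE_RATE = 0.15  # hypothetical base rate of latent toxoplasmosis among all subjects

def policy_value(policy: str) -> float:
    """Ex-ante value of a precommitted policy: infection probability is the base rate either way."""
    return (PET_HEDONS if policy == "always pet" else 0) + BASE_RATE * TOXO_HEDONS

print(policy_value("always pet") - policy_value("always refrain"))  # +1000
```

Both decision theories agree ex ante, which is what exposes the structural difference from Newcomb’s Problem, where the preferred precommitment (one-boxing) differs from CDT’s in-the-moment choice.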
Parents:
- Newcomblike decision problems
Decision problems in which your choice correlates with something other than its physical consequences (say, because somebody has predicted you very well) can do weird things to some decision theories.