author: niplav, created: 2023-12-04, modified: 2023-12-04, language: english, status: maintenance, importance: 3, confidence: certain

I remain unconvinced by preference utilitarianism. Here's why.

Arguments Against Preference Utilitarianism

$$\text{argmax} \sum ☺-☹$$

—Anders Sandberg, FHI Final Report p. 52, 2024

Preference utilitarianism enjoys great popularity among utilitarians, and I tend to agree that it is a good pragmatic compromise, especially in the context of politics.

However, most formulations I have encountered raise problems that I have not seen mentioned or addressed elsewhere.

The Identification Argument

One issue with preference utilitarianism concerns the word “preference”: which things in the world have preferences, and where those preferences are located. What kinds of physical structures can be identified as having preferences (we might call this the identification problem), and where exactly are those preferences located (one might call this the location problem)? If one is purely behavioristic about this question, then every physical system can be said to have preferences, with the corollary that a system in equilibrium has achieved its preferences. This is clearly nonsensical, as also explored in Filan 2018.
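To see how degenerate the behaviorist reading is, here is a toy sketch (my own construction, not from Filan 2018): for any system sitting at an equilibrium, one can always write down a utility function that the system is already maximizing perfectly.

```python
import numpy as np

# Toy behaviorist "preference ascription": any system at equilibrium
# maximizes the utility function that peaks at its current state.
def ascribe_utility(equilibrium_state: np.ndarray):
    """A utility function under which the observed equilibrium is optimal."""
    return lambda x: -float(np.linalg.norm(x - equilibrium_state))

rock = np.array([0.0, 0.0, 0.0])  # a rock, sitting still
u = ascribe_utility(rock)

# No perturbed state scores higher, so the rock "perfectly fulfills"
# the preferences we just ascribed to it.
perturbations = np.random.default_rng(0).normal(size=(100, 3))
assert all(u(rock) >= u(rock + d) for d in perturbations)
```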

If we argue that this is pure distinction-mongering, and that we “know an agent when we see one”, it might still be pointed out that evolution is agent-like enough to fall into our category of an agent, yet we do not feel obligated to spend a significant part of our resources on copying and storing large amounts of DNA molecules.

Even restricting ourselves to humans, we still have trouble identifying the computations inside human brains that could be said to constitute those preferences; see e.g. Hayden & Niv 2021. If we instead go with revealed preferences, then unless we assume some level of irrationality, we can never ascertain that any human preference went unfulfilled (since we could always assume that at each moment, each human is perfectly fulfilling their own preferences).
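The same degeneracy holds for revealed preferences; as a minimal sketch (mine, though the point echoes the no-free-lunch results in value learning), any observed choice history can be rationalized after the fact:

```python
# Any choice history is perfectly rationalized by *some* utility
# function, so without an assumed error model, an "unfulfilled
# preference" is never observable.
def rationalizing_utility(observed_choices):
    chosen = set(observed_choices)
    return lambda option: 1.0 if option in chosen else 0.0

history = ["stay in bed", "doomscroll", "skip lunch"]
u = rationalizing_utility(history)
assert all(u(choice) == 1.0 for choice in history)  # always "optimal"
```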

These are, of course, standard problems in value learning (Soares 2018).

Preference-Altering Actions Disallowed

Even if agents bearing preferences can be identified and the preferences they bear can be located, ethical agents face a dubious demand: insofar as only the preferences of existing agents matter (i.e. our population axiology is person-affecting), the ethical agent is forced to stabilize existing consistent preferences (and perhaps also to make inconsistent preferences consistent), because every stable preference implies a “meta-preference” for its own continued existence (Omohundro 2008).

However, this conflicts with ethical intuitions: We would like to allow ethical patients to undergo moral growth and reflect on their values.

(I do not expect this to be a practical issue, since at least in human brains, I expect there to be no actually consistent internal preferences. With simpler organisms or very simple physical systems, this might become an issue, but one wouldn't expect them to have undergone significant moral growth in any case.)
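A minimal sketch of the stabilization argument above (after Omohundro 2008, the toy setup is mine): an agent that evaluates a possible preference change using its current preferences will reject the change.

```python
# The agent scores futures with its *current* utility function, so
# adopting a different utility function looks bad by construction.
world = ["paperclips", "staples"]
current  = lambda o: {"paperclips": 1.0, "staples": 0.0}[o]
modified = lambda o: {"paperclips": 0.0, "staples": 1.0}[o]

def value_of_adopting(future_utility, current_utility=current):
    """Current-utility value of what the agent will pursue later."""
    future_choice = max(world, key=future_utility)
    return current_utility(future_choice)

assert value_of_adopting(current) > value_of_adopting(modified)
# Stable preferences thus "prefer" their own continued existence.
```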

Possible People

If we allow the preferences of possible people to influence our decision procedure, we run into trouble very quickly.

In the most realistic case, imagine we can perform genetic editing (or embryo selection) to select for traits in new humans, and assume that the psychological profile of people who really want to have been born is at least somewhat genetically determined, and we can identify and modify those genes. (Alternatively, imagine that we have found out how to raise people so that they have a great preference for having been born, perhaps by an unanticipated leap in developmental psychology).

Then it seems like preference utilitarianism that includes possible people demands that we try to grow humanity as quickly as possible, with most people being modified in such a way that they strongly prefer being alive and having been born (if they are unusually inept in one or more ways, we would like to have some people around who can support them).

However, this preference for having been born doesn't guarantee an enjoyment of life in the commonsense way. It might be that while such people really prefer being alive, they're not particularly happy while being alive. Indeed, since the tails usually come apart, I would guess that those people wouldn't be much happier than current humans (an example of causal Goodhart).
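A quick numerical sketch of the tails coming apart (the numbers are illustrative assumptions, not estimates): even if the engineered preference for having been born correlates decently with happiness, the people most extreme on the former are far from the happiest.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
r = 0.6  # assumed proxy-target correlation, purely illustrative

prefer_birth = rng.normal(size=n)  # the selected-for proxy
happiness = r * prefer_birth + np.sqrt(1 - r**2) * rng.normal(size=n)

top = np.argsort(prefer_birth)[-100:]  # the 100 strongest preferrers
print(f"mean happiness of top preferrers: {happiness[top].mean():.2f}")
print(f"happiest person overall:          {happiness.max():.2f}")
# The top preferrers regress toward r times their proxy score; the
# global happiness optimum is held by someone outside the selected set.
```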

Preference utilitarians who respect possible preferences might just bite this bullet and argue that this is indeed the correct thing to do.

But, depending on the definition of an ethical patient who displays preferences, the moral patient who maximally prefers existing might look nothing like a typical human, and more like an intricate E. coli-sized web of diamond or a very fast rotating blob of strange matter. The only people I can imagine willing to bite this bullet are probably too busy running around robbing ammunition stores.

Side-Note: Philosophers Underestimate the Strangeness of Maximization

Often in arguments with philosophers, especially about consequentialism, I find that most of them underappreciate how strange the results of very strong optimization can be. Whenever there's an $\text{argmax}$ in your decision procedure, the result will probably look nothing like what you imagine it looking like, especially if the optimization doesn't have conservative concept boundaries.
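A toy sketch of this (the setup is entirely invented for illustration): fit a proxy for “goodness” on familiar cases, hand it to an optimizer over a larger action space, and the argmax lands somewhere absurd.

```python
import numpy as np

true_goodness = lambda x: -(x - 1.0) ** 2  # actually peaks at x = 1
observed = np.linspace(0.0, 0.9, 10)       # only mild actions observed
slope, intercept = np.polyfit(observed, true_goodness(observed), 1)
proxy = lambda x: slope * x + intercept    # locally decent linear proxy

actions = np.linspace(0.0, 100.0, 10_001)  # a much larger action space
best = actions[np.argmax(proxy(actions))]
print(best, true_goodness(best))           # picks 100.0; true value ≈ -9801
```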

Preference-Creating Preferences

If you restrict your preference utilitarianism to currently existing preferences, you might get lucky and avoid this kind of scenario. But maybe you won't: if there are any currently existing preferences of the form P=“I want there to be as many physically implemented instances of P to exist as possible” (these are possible to represent as quines), you have two choices: either you exclude such self-referential preferences from moral consideration (which requires a principled criterion for doing so), or you count them like any other preference.

In the latter case, you land in a universe filled with physical systems implementing the preference P.
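To make the quine remark concrete, here is one way (a sketch; many encodings work) to write such a preference as a string that contains its own full construction, so that “instances of P” is well-defined without pointing at anything outside P:

```python
# P describes copies of the very string that P is.
template = ('I want as many physically implemented instances as possible '
            'of the string produced by: template = %r; P = template %% template')
P = template % template
print(P)
# Running the code that P describes reproduces P exactly (a quine),
# so an agent satisfying P just manufactures more copies of P.
```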

Summary

All forms of preference utilitarianism face the challenge of identifying which systems have preferences, and how those preferences are implemented. Person-affecting variants additionally push the ethical agent toward freezing existing preferences, conflicting with moral growth; variants that count possible preferences are maximized by preference-bearers that need not resemble humans at all; and even variants restricted to existing preferences can be captured by self-replicating preferences.
