You Can't Objectively Compare Seven Bees to One Human
You can't make this comparison without introducing a subjective judgement.
One thing I've been quietly festering about for a year or so is the Rethink Priorities Welfare Range Report. It gets dunked on a lot for its conclusions, and I understand why. The argument deployed by individuals such as Bentham's Bulldog boils down to: “Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts.” Most people who argue like this are doing so in bad faith and should just be ignored.
I'm writing this as an attempt to crystallize what I think are the serious problems with this report, and with its line of thinking in general. I'll start with
Unitarianism vs Theory-Free Models
No, not the church from Unsong. From the report:
Utilitarianism, according to which you ought to maximize (expected) utility.
Hedonism, according to which welfare is determined wholly by positively and negatively valenced experiences (roughly, experiences that feel good and bad to the subject).
Valence symmetry, according to which positively and negatively valenced experiences of equal intensities have symmetrical impacts on welfare.
Unitarianism, according to which equal amounts of welfare count equally, regardless of whose welfare it is.
Now unitarianism sneaks in a pretty big assumption here when it says 'amount' of welfare. It leaves out what 'amount' actually means. Do RP actually define 'amount' in a satisfying way? No!2
You can basically skip to “The Fatal Problem” from here, but I want to go over some clarifications first.
Evolutionary Theories Mentioned in the Report
I ought to mention that the report does discuss three theories about the evolutionary function of valenced experience, but these aren't relevant here, since they still don't make claims about what valence actually is. If you think they do, then consider the following three statements:
It is beneficial for organisms to keep track of fitness-relevant information
It is beneficial for organisms to have a common currency for decision making
It is beneficial for organisms to label states as good or bad, so they can learn
First, note that these theories aren't at all mutually exclusive; they seem to be three ways of looking at the same thing. Second, none of them gives us a way to compare valence between different organisms: for example, if we're looking at fitness-relevant information, there's no principled way to compare +5.2 expected shrimp-grandchildren with +1.5 expected pig-grandchildren.3
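To make that incommensurability concrete, here's a minimal sketch of my own (not anything from the report): two fitness signals denominated in species-specific units, with no exchange rate supplied by the theory.

```python
from dataclasses import dataclass

# Two fitness signals, each denominated in its own species-specific "currency".
# The evolutionary story says each organism should track its own signal;
# it says nothing about how to convert one currency into the other.

@dataclass(frozen=True)
class ShrimpFitness:
    expected_grandchildren: float

@dataclass(frozen=True)
class PigFitness:
    expected_grandchildren: float

def compare(a: ShrimpFitness, b: PigFitness) -> bool:
    # Any implementation has to invent a shrimp-to-pig exchange rate.
    # The theory supplies none, so whatever we picked here would be arbitrary.
    raise NotImplementedError("no principled shrimp-to-pig conversion")

# compare(ShrimpFitness(5.2), PigFitness(1.5))  # would raise NotImplementedError
```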
All of this is fine, since the evolutionary function of valence is a totally different issue to the cognitive representation of valence.
This is called the ultimate cause/proximate cause distinction and crops up all the time in evolutionary biology. An example is this:
Question: why do plants grow tall?
Proximate answer: two classes of hormones (auxins and gibberellins) cause cells to elongate and divide, respectively
Ultimate answer: plants can get more light by growing above their neighbors, so plants that grow taller are favoured
The Fatal Problem
The fact that the authors of the report don't give us any proximate theories of consciousness, unfortunately, damns the whole project to h∄ll, which is where poor technical philosophies go when they make contact with reality (good technical philosophies stick around if they're true, or go to h∃aven if they're false).4
If I could summarize my biggest issue with the report, it's this:
Unitarianism smuggles in an assumption of “amount” of valence, but the authors don't define what “amount” means in any way, not even to give competing theories of how to do so.
This, unfortunately, makes the whole thing meaningless. It's all vibes! To reiterate, the central claim being made by the report is:
There is an objective thing called 'valence' which we can assign to four-volumes of spacetime using a mathematical function (but we're not going to even speculate about the function here)
Making one human brain happy (as opposed to sad) increases the valence of that human brain by one arbitrary unit per cubic-centimeter-second
On the same scale, making one bee brain happy (as opposed to sad) increases the valence of that bee brain by fifteen thousand arbitrary units per cubic-centimeter-second
I don't think there's a function I would endorse that behaves in that way.
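To spell out what that kind of function would have to commit to, here's a toy back-of-envelope sketch. The brain-volume figures are my own rough assumptions, not the report's, and the point is only structural: once valence is a density over spacetime four-volumes, the whole-organism welfare ratio is pinned down by brain volumes and the per-volume rate, so any bee-to-human ratio silently commits you to a specific per-volume multiplier.

```python
# Toy sketch of the kind of function the report's framing would require.
# The brain-volume figures are rough, illustrative assumptions of mine.

HUMAN_BRAIN_CM3 = 1200.0  # roughly 1.2 litres
BEE_BRAIN_CM3 = 0.001     # roughly 1 cubic millimetre

def implied_density_multiplier(whole_organism_ratio: float) -> float:
    """Per-(cm^3 * s) valence rate a bee brain would need, relative to a human
    brain's, for its whole-organism welfare to be `whole_organism_ratio` times
    a human's under a purely volumetric valence function."""
    return whole_organism_ratio * HUMAN_BRAIN_CM3 / BEE_BRAIN_CM3

# e.g. a 7% bee-to-human welfare ratio implies a per-volume rate tens of
# thousands of times higher in the bee brain:
print(implied_density_multiplier(0.07))  # ~84,000
```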
My Position
Since I've critiqued other people's positions, I should state my own. It's only polite:
I don't think there's an objective way to compare valence between different minds at all.
You can anchor on neuron count and I won't criticize you, since that's at least proportional to information content, but it's still an arbitrary choice (a toy version of this anchoring is sketched after this list). You can instead claim that what you care about is a particular form of self-modelling and discount anything without a sophisticated self-model.5
All choices of moral weighting are somewhat arbitrary. All utilitarian-ish claims about morality are about assigning values to different computations, and there's no easy way to compare the computations in a human vs a fish vs a shrimp vs a nematode.
The most reasonable critiques are critiques of the marginal consistency of different worldviews. For example, a worldview which values the computations going on inside all humans except for those with red hair is fairly obviously marginally less consistent than one which makes no reference to hair colour.
Whether a worldview values one bee as much as 1 human, 0.07 humans, or 1e-6 humans is primarily a matter of choice and, frankly, aesthetics.
Just because we're throwing out objectivity doesn't mean we have to throw out 'good' and 'bad' as judgements on actions, or even on people. A person who treats gingers badly on the basis of a worldview like the one above can still be said to be evil.6 How much of the world you write off as evil is also an arbitrary judgement, and you should not make that judgement lightly.
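For what it's worth, here's what the neuron-count anchoring mentioned above looks like as a toy calculation. The neuron counts are rough published estimates, and making moral weight proportional to them is exactly the arbitrary modelling choice I'm talking about, not a derived fact.

```python
# Toy version of "anchor moral weight on neuron count".
# Neuron counts are rough published estimates; the proportionality itself is
# the arbitrary choice being discussed.

NEURON_COUNTS = {
    "human": 86_000_000_000,   # ~86 billion
    "pig": 2_200_000_000,      # rough estimate
    "honeybee": 1_000_000,     # ~1 million
    "shrimp": 100_000,         # very rough
    "nematode": 302,           # C. elegans, counted exactly
}

def neuron_anchored_weight(species: str) -> float:
    """Moral weight relative to a human, under the (arbitrary) choice of
    anchoring on neuron count."""
    return NEURON_COUNTS[species] / NEURON_COUNTS["human"]

for species in NEURON_COUNTS:
    print(f"{species}: {neuron_anchored_weight(species):.1e}")
```

Under this anchor a honeybee comes out at roughly 1e-5 of a human rather than 0.07; nothing in the calculation tells you which anchor, if either, is the right one.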
Image taken from https://upload.wikimedia.org/wikipedia/commons/b/b5/Honey_bee_%28Apis_mellifera%29.jpg, licensed under Creative Commons.
2. What would it even mean to do that? Suppose you were into free-energy-minimization as a form of perceptual control. You could think of the brain as carrying out a series of prediction-update cycles, where each prediction was biased by some welfare-increasing term. Then you could define the total amount of suffering in the universe as the sum over all cycles of the prediction error. You'd end up a negative utilitarian, but you could do it, and it would give you an objective way of comparing between individuals. Even if this particular example is incoherent in some ways, it does at least contain a term which can be compared between individuals.
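A minimal sketch of that accounting, assuming (and this assumption is doing all the work) that we could somehow read off each mind's prediction-update cycles:

```python
from typing import Iterable

def total_suffering(prediction_error_streams: Iterable[Iterable[float]]) -> float:
    """Total suffering = the sum, over every mind and every prediction-update
    cycle, of that cycle's prediction error. Because every term is in the same
    units (prediction error), minds of different species can be added together."""
    return sum(abs(err) for stream in prediction_error_streams for err in stream)

# One human-like stream and one bee-like stream, in the same currency:
print(total_suffering([[0.3, 0.1, 0.7], [0.02, 0.05]]))  # ~1.17
```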
3. Also, consider the normative statements we get if we start talking about moral weight:
You should care about anything which keeps track of fitness-relevant information
You should care about anything which has a common currency for decision making
You should care about anything which labels states as good or bad for learning
Now, to me, these are incorrect moral statements. This doesn't actually change the previous point, but I do find it useful when talking about morality to check in every now and then with the question 'What does this imply I should care about?'
4. I've read chunks of the rest of the report, and it gives me an eyes-glazing-over feeling which I have recently come to recognize as a telltale sign of unclear thinking. Much of it just cites different theories with no real integration of them. I will make an exception for "Does Critical Flicker-Fusion Frequency Track The Subjective Experience of Time?", which raises a very interesting point and is worth a read, at least in part.
5. I currently think in terms of some combination of the two (neuron-count anchoring and self-modelling).
6. I think there's a Scott Alexander piece which discusses moral disagreements of this form. The conclusion was that some worldviews can be considered evil, even if they're in some sense disputes about the world, if they're sufficiently poorly-reasoned.