Why I'm Prioritizing Animal-Focused Values Spreading
Part of a series for My Cause Selection 2016. For background, see my writings on cause selection for 2015 and my series on quantitative models.
The last time I wrote about values spreading, I primarily listed reasons why we might expect existential risk reduction to have a greater impact. Now I’m going to look at why values spreading—and animal advocacy in particular—may look better.
When I developed a quantitative model for cause prioritization, the model claimed that effective animal advocacy has a greater expected impact than AI safety research. Let’s look at some qualitative reasons why the model produces this result:
- Animal advocacy has lower variance—we’re more confident that it will do a lot of good, especially in the short to medium term.
- Animal advocacy is more robustly positive—it seems unlikely to do lots of harm,[1] whereas the current focus of AI safety research could plausibly do harm. (This is really another way of saying that AI safety interventions have high variance.)
- The effects of animal advocacy on the far future arguably have better feedback loops.
- Animal advocacy is more robust against overconfidence in speculative arguments. I believe we ought to discount the arguments for AI safety somewhat because they rely on hard-to-measure claims about the future. We could similarly say that we shouldn’t be too confident about what effect animal advocacy will have on the far future, but it also has immediate benefits. Some people put a lot of weight on this sort of argument; I don’t give it much, but I’m still wary given that people have a history of making overconfident claims about what the future will look like.
Additionally, my current model does not effectively capture some considerations related to funding constraints:
- Lots of major actors have reason to want to reduce existential risk, either because they’re selfish or because they want to behave altruistically toward other humans. Fewer influential people want to help non-humans (animals or other possible sentient beings), so we have some reason to expect existential risk to become better funded in the long term.
- Existential risk funding has been increasing rapidly over the past few years, and if this continues, it will soon receive much more funding than animal advocacy. And certain groups of animals, such as wild animals (a.k.a. most of the sentient beings that exist), receive almost no support. Even less effort goes into optimizing advocacy for improving the far future.
- The Open Philanthropy Project is ramping up funding for AI safety and considers it a top priority. Meanwhile, its animal advocacy efforts currently focus fairly narrowly on corporate campaigns and related work, which represent only a small segment of possible interventions. Open Phil may expand its activities in the future, but I believe we should not expect it to fill the funding gaps in most factory farming-related causes.
(I do still believe AI safety, and existential risk in general, has room for non-Open Phil donors, and the fact that Open Phil prioritizes AI safety does not strongly deter me from donating. This is just one piece of evidence weighing in favor of animal advocacy over existential risk.)
How choice of prior affects cause selection
Animal advocacy appears relatively likely to have a positive impact, and it could have quite a large effect on the far future. We can probably apply an even larger lever on the far future by working to prevent extinction, but here we have less certainty about how much impact we can have, or whether our work will even help more than it will hurt.
This means that, in short:
- Narrower priors tend to favor values spreading.
- Wider priors tend to favor existential risk.
(See my essay, “On Priors”, for how my model uses a prior distribution and why it matters.)
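To see the mechanics behind this, here is a toy sketch of how prior width drives the result. This is not the actual model from “On Priors”; the normal-normal setup and all the numbers below are invented purely to illustrate the shrinkage effect.

```python
# Toy illustration of how prior width affects cause comparisons.
# Assumptions: normal prior, normal cost-effectiveness estimates, and
# made-up numbers; this is NOT the model from "On Priors".

def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Posterior mean when a normal prior is combined with a normal estimate."""
    w = prior_var / (prior_var + estimate_var)  # weight placed on the estimate
    return (1 - w) * prior_mean + w * estimate

# Hypothetical cost-effectiveness estimates (arbitrary "good per dollar"):
# animal advocacy: modest expected value, low variance;
# existential risk: enormous expected value, enormous variance.
animal = dict(estimate=10.0, estimate_var=25.0)
xrisk = dict(estimate=1000.0, estimate_var=1e7)

for label, prior_var in [("narrow", 10.0), ("wide", 1e6)]:
    a = posterior_mean(0.0, prior_var, **animal)
    x = posterior_mean(0.0, prior_var, **xrisk)
    print(f"{label} prior: animal advocacy ~ {a:.1f}, existential risk ~ {x:.1f}")
```

With the narrow prior, the noisy existential risk estimate gets shrunk almost to nothing and animal advocacy comes out ahead; with the wide prior, the existential risk estimate retains enough of its enormous mean to dominate.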
I do not have a clear sense of how wide or narrow our prior distribution ought to be. I tend to think that a prior wide enough to favor existential risk has too much variance, but I don’t have good arguments for this, or even a sense of what would constitute a good argument. The prize for convincing me to change my mind here is that I will donate about $20,000 to existential risk causes (and more in future years).
When we argue that the far future matters much more than the present, we make claims involving astronomically large numbers, such as the claim that there could exist 10^46 humans per century. These numbers are so large that we have no real track record of reasoning about similar quantities, and therefore no reliable way to judge how good we are at it.
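As a concrete illustration of why these arguments are hard to evaluate (only the 10^46 figure comes from the argument above; the probability is invented for the example):

```python
# Naive expected-value arithmetic with astronomical stakes.
future_lives = 10**46       # potential humans per century (figure from the text)
tiny_success_prob = 1e-12   # hypothetical chance our work averts extinction
print(f"expected lives saved: {future_lives * tiny_success_prob:.0e}")
# -> 1e+34, which swamps any plausible near-term estimate. Once the numbers
# get this large, naive expected-value comparisons hinge entirely on
# probabilities we have no track record of estimating.
```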
Why pick a cause?
It’s certainly not obvious that I should choose one cause to focus on. Last year when I was deciding where to donate, I spent time learning about organizations in a variety of causes. Perhaps that sort of approach makes sense; but of course, investigating twice as many organizations takes twice as long. This year I wanted to narrow my search space so I could spend more time on organizations within whichever cause area appeared strongest.
I can save time by narrowing my search space before investigating lots of organizations. But picking a cause early does have a downside: I don’t have as much information now as I will in the future, so perhaps I should wait. However, I have found that learning about individual organizations does not help me make high-level cause prioritization decisions. I don’t believe I gain much by waiting to see what evidence comes in before picking a cause; instead, I should pick a cause first and organizations after. That doesn’t mean we should rush a decision about what cause to prioritize—I’ve already thought about this a lot.
So, although I still have a lot of uncertainty about what cause area to focus on, for this year I’m going to narrow my research efforts to animal-focused values spreading.
Notes
1. Certainly animal advocacy could do harm, for example by encouraging wilderness preservation. But it does appear considerably less likely to cause harm than any other cause area.