The last time I wrote about values spreading, I primarily listed reasons why we might expect existential risk reduction to have a greater impact. Now I’m going to look at why values spreading—and animal advocacy in particular—may look better.
When I developed a quantitative model for cause prioritization, the model claimed that effective animal advocacy has a greater expected impact than AI safety research. Let’s look at some qualitative reasons why the model produces this result:
- Animal advocacy has lower variance—we’re more confident that it will do a lot of good, especially in the short to medium term.
- Animal advocacy is more robustly positive—it seems unlikely to do lots of harm1, whereas the current focus of AI safety research could plausibly do harm. (This is really another way of saying that AI safety interventions have high variance.)
- The effects of animal advocacy on the far future arguably have better feedback loops.
- Animal advocacy is more robust against overconfidence in speculative arguments. I believe we ought to discount the arguments for AI safety somewhat because they rely on hard-to-measure claims about the future. We could similarly say that we shouldn’t be too confident about what effect animal advocacy will have on the far future, but animal advocacy also has immediate benefits. Some people put a lot of weight on this sort of argument; I don’t give it much, but I’m still wary, given that people have a history of making overconfident claims about what the future will look like.
Counterfactuals matter. When you’re taking a job, you should care about who would take the job if you didn’t, and how much worse a job than you they’d do.
This matters from the other side too: employers should consider counterfactuals when deciding whom to hire. Suppose you’re an employer considering a promising candidate. What would that candidate do if you didn’t hire them, and how much good would it do compared to working for you?
If a particular candidate cares a lot about improving the lives of sentient beings, they’d probably do something valuable even if they didn’t get hired, and this should count as a consideration against hiring them.
Like the last time I wrote something like this, my suggestions here could apply to any large foundation. But most large foundations don’t care at all about what I say, and the Open Philanthropy Project cares at least a tiny bit about what I say, so I’m going to focus on Open Phil.
The Open Philanthropy Project ought to prioritize wild animal suffering (WAS). Here’s why:
- WAS is important and neglected.
- WAS is not tractable for most actors, but it’s tractable for Open Phil.
(Previously I discussed some of my issues with the importance/neglectedness/tractability framework, but I believe it works reasonably well for our purposes here.)
Why wild animal suffering matters
The problem of wild animal suffering has enormous scale: there exist far more sentient wild animals than humans or factory-farmed animals. Some other problems (such as existential risk) may ultimately matter more, but among problems happening right now, wild animal suffering is by far the biggest.
Additionally, wild animal suffering is neglected: hardly anyone cares about this problem, and of the people who do care, hardly any are trying to do anything about it. Animal Ethics is the only organization spending non-trivial time on the problem of wild animal suffering, and it’s a small organization with limited staff time and narrow focus; I see room for much, much more work on reducing suffering in the wild than what Animal Ethics does currently.
Why Open Phil should prioritize wild animal suffering
Among people who care about animals, the biggest objection to reducing wild animal suffering is that it’s intractable. But this is mistaken: there is plenty we can do right now to work toward reducing wild animal suffering. (If you doubt that we can do anything about wild animal suffering, please, please read my essay on this subject, and if you disagree, leave a comment explaining why.)
Even given the sad state of WAS research, we already have some concrete proposals for how to reduce wild animal suffering without risking big negative side effects. For example, Brian Tomasik has suggested paying farmers to use humane insecticides. Calculations suggest that this could prevent 250,000 painful deaths per dollar. This intervention alone looks much more cost-effective than GiveDirectly even if we heavily discount insects’ capacity for suffering. And this is just an initial idea; surely there exist much more effective interventions than this, and we could find them if we spent more time looking.
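To make the discounting claim concrete, here is a rough back-of-the-envelope sketch. Only the 250,000-deaths-per-dollar figure comes from the text above; the discount factor on insect suffering is a purely hypothetical placeholder, not Tomasik’s or my number:

```python
# Back-of-the-envelope sketch. The deaths-per-dollar figure is from the text;
# the discount factor is a hypothetical placeholder, not the author's estimate.
painful_deaths_prevented_per_dollar = 250_000

# Weight a painful insect death at one millionth the badness of a painful
# human-scale death (a deliberately heavy discount).
insect_suffering_discount = 1e-6

# Even under this heavy discount, each dollar still averts a nontrivial
# amount of discounted suffering.
discounted_harm_averted_per_dollar = (
    painful_deaths_prevented_per_dollar * insect_suffering_discount
)
print(discounted_harm_averted_per_dollar)  # 0.25
```

The point of the exercise is that the raw numbers are so large that even an aggressive discount on insect sentience leaves a substantial figure.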
Reducing suffering in the wild is probably much more tractable than most people tend to think. That said, if you want to work on wild animal suffering, you either need specific relevant skills (which are rare and hard to develop) or you need to fund an organization doing relevant work; and right now Animal Ethics is the only such organization. We have something of a coordination problem here where people won’t work on wild animal suffering because they can’t get funding, and people don’t want to fund it because so few people are working on it.
What we need is a large, committed source of funding to jump-start the cause. If the Open Philanthropy Project began funding work on wild animal suffering, it could stimulate new research efforts or small-scale interventions by offering grants. Specifically, Open Phil should probably create a new focus area for wild animal suffering and possibly hire dedicated staff. This problem has such large scale, and so many possible interventions, that it absolutely deserves to be a dedicated focus area. Open Phil might consider lumping WAS under its farm animal welfare program, but this would excessively constrain its budget and limit the amount of staff time that it could receive. Wild animal suffering is a massive problem, and easily deserves as much attention as most of Open Phil’s other focus areas.
Let’s look at how we use frameworks to prioritize causes. We’ll start with the commonly-used importance/neglectedness/tractability framework and see why it often works well and where it breaks down. Then we’ll consider an alternative approach.
- Importance: How big is the problem?
- Neglectedness: How much work is being done on the problem already?
- Tractability: Can we make progress on the problem?
This framework acts as a useful guide to cause prioritization. Let’s look at some of its benefits and problems.
Part of a series on quantitative models for cause selection.
In the past I’ve written qualitatively about what sorts of interventions likely have the best far-future effects. But qualitative analysis is maybe not the best way to decide this sort of thing, so let’s build some quantitative models.
I have constructed a model of various interventions and put them in a spreadsheet. This essay describes how I came up with the formulas to estimate the value of each intervention and makes a rough attempt at estimating the inputs to the formulas. For each input, I give either a mean and σ1 or an 80% confidence interval (which can be converted into a mean and σ). Then I combine them to get a mean and σ for the estimated value of the intervention.
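The conversion from an 80% confidence interval to a mean and σ isn’t spelled out here. A minimal sketch, assuming the input is normally distributed (a hypothetical simplification; for a lognormal input, the same formulas would apply to the logarithms of the interval endpoints):

```python
# z-score bracketing the central 80% of a normal distribution (about ±1.2816σ).
Z_80 = 1.2816

def ci_80_to_mean_sigma(low, high, z=Z_80):
    """Convert an 80% confidence interval (low, high) into a mean and sigma,
    assuming the underlying quantity is normally distributed."""
    mean = (low + high) / 2
    sigma = (high - low) / (2 * z)
    return mean, sigma

# Example: an 80% CI of [10, 1000] on some intervention's value.
mean, sigma = ci_80_to_mean_sigma(10, 1000)
print(mean, round(sigma, 1))  # 505.0 386.2
```

The spreadsheet may handle asymmetric or lognormal intervals differently; this only shows the basic normal-distribution arithmetic.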
This essay acts as a supplement to my explanation of my quantitative model. The other post explains how the model works; this one goes into the nitty-gritty details of why I set up the inputs the way I did.
Note: All the confidence intervals here are rough first attempts and don’t represent my current best estimates; my main goal is to explain how I developed the series of models presented here. I use dozens of different confidence intervals in this essay, so for the sake of time I have not revised them as my estimates have changed. To see my up-to-date estimates, see my final model. I’m happy to hear about things you think I should change, and I’ll edit my final model to incorporate feedback. And if you want to change the numbers, you can download the spreadsheet and mess around with it. This describes how to use the spreadsheet.
Part of a series on quantitative models for cause selection.
Update: There’s now a web app that can do everything the spreadsheet could and more.
Quantitative models offer a superior approach to determining which interventions to support. However, naive cost-effectiveness estimates have big problems. In particular:
- They don’t give stronger consideration to more robust estimates.
- They don’t always account for all relevant factors.
We can fix the first problem by starting from a prior distribution and updating based on evidence: more robust evidence causes a bigger update. And we can fix the second problem by carefully considering the most important effects of each intervention and developing a strong quantitative estimate that incorporates all of them.
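As a sketch of the first fix, suppose (hypothetically; the actual model may use different distributions) that both the prior and the naive estimate are normal distributions over log cost-effectiveness. The posterior mean is then a precision-weighted average, so a more robust estimate (smaller σ) moves the posterior further from the prior:

```python
# Sketch of the first fix, assuming both the prior and the naive estimate are
# normal distributions over log cost-effectiveness. The posterior mean is a
# precision-weighted average of the two means.

def posterior(prior_mean, prior_sigma, est_mean, est_sigma):
    """Combine a normal prior with a normal estimate; return (mean, sigma)."""
    prior_prec = 1 / prior_sigma ** 2  # precision = 1 / variance
    est_prec = 1 / est_sigma ** 2
    post_mean = (prior_prec * prior_mean + est_prec * est_mean) / (prior_prec + est_prec)
    post_sigma = (prior_prec + est_prec) ** -0.5
    return post_mean, post_sigma

# A robust estimate (small sigma) pulls the posterior most of the way from
# the prior mean (0) toward the estimate (3)...
robust_mean, _ = posterior(0, 1, 3, 0.5)   # 2.4
# ...while a noisy estimate of the same size barely moves it.
noisy_mean, _ = posterior(0, 1, 3, 4)      # ~0.18
```

The specific numbers are illustrative, but the qualitative behavior (stronger evidence yields a bigger update) is exactly the property the model needs.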
So that’s what I did. I developed a quantitative model for comparing interventions and wrote a spreadsheet to implement it.
Update 2016-12-14: GiveWell’s 2016 cost-effectiveness analysis has updated the way it handles population ethics. It now explicitly takes the value of saving a 5-year-old’s life as an input and no longer assumes that it’s worth 36 life-years.
GiveWell claims that the Against Malaria Foundation (AMF) is about 10 times as cost-effective as GiveDirectly. This entails unusual claims about population ethics that I believe many people would reject, and according to other plausible views of population ethics, AMF looks less cost-effective than the other GiveWell top charities.
GiveWell’s Implicit Assumptions
A GiveWell-commissioned report suggests that population will hardly change as a result of AMF saving lives. GiveWell’s cost-effectiveness model for AMF assumes that saving one life creates about 35 quality-adjusted life years (QALYs), and uses this to assign a quantitative value to the benefits of saving a life. But if AMF causes populations to decline, that means it’s actually removing (human) QALYs from the world; so you can’t justify AMF’s purported cost-effectiveness by saying it creates more happy human life, because it doesn’t.
You could instead justify AMF’s life-saving effects by saying it’s inherently good to save a life, in which case GiveWell’s cost-effectiveness model shouldn’t interpret the value of lives saved in terms of QALYs created/destroyed, and should include a term for the inherent value of saving a life.
GiveWell claims that AMF is about 10 times more cost-effective than GiveDirectly, and GiveWell ranks AMF as its top charity partially on this basis (see “Summary of key considerations for top charities” in the linked article). This claim depends on the assumption that saving a life creates 35 QALYs.
To justify GiveWell’s cost-effectiveness analysis, you could say that it is good to cause existing people to live longer, but it is not bad to prevent people from existing. (Sean Conley of GiveWell says he and many other GiveWell staffers believe this.)
In particular, you’d have to assume that:
Starting about seven years ago, every time I heard a song that I really liked that stuck with me for the rest of the day, I recorded it in my journal on a list of “Songs of the Day”.
This list shows how my musical tastes have shifted over the years. It isn’t entirely representative because there are plenty of songs I love that never made it onto this list.
Here’s the complete list up to the time of this writing. An asterisk means that I liked the song before it was Song of the Day, but I gained a new appreciation for it on that day. I have a corresponding Spotify playlist, although it only includes songs up to November 2015.
Part of a series on quantitative models for cause selection.
One major reason that effective altruists disagree about which causes to support is that they have different opinions on how strong an evidence base an intervention should have. Previously, I wrote about how we can build a formal model to calculate expected value estimates for interventions. You start with a prior belief about how effective interventions tend to be, and then adjust your naive cost-effectiveness estimates based on the strength of the evidence behind them. If an intervention has stronger evidence behind it, you can update further away from your prior and toward its naive estimate.
For a model like this to be effective, we need to choose a good prior belief. We start with a prior probability distribution P(x), which gives the probability that a randomly chosen intervention1 has utility x (for whatever metric of utility we’re using, e.g. lives saved). To determine the posterior expected value of an intervention, we combine this prior distribution with our evidence about how much good the intervention does.
For this to work, we need to know what the prior distribution looks like. In this essay, I attempt to determine what shape the prior distribution has and then estimate the values of its parameters.
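To illustrate why the shape matters so much (the Pareto distribution below is just a common heavy-tailed example, not necessarily the distribution the essay settles on): a Pareto prior with scale x_min and tail index α has mean αx_min/(α−1) when α > 1, and an infinite mean otherwise, so a small change in the fitted tail parameter can swing the prior’s expected value enormously:

```python
# Mean of a Pareto distribution: alpha * x_min / (alpha - 1) for alpha > 1.
# The Pareto here is an illustrative heavy-tailed choice, not the essay's fit.

def pareto_mean(x_min, alpha):
    """Mean of a Pareto distribution with scale x_min and tail index alpha."""
    if alpha <= 1:
        return float("inf")  # tail so heavy that the mean diverges
    return alpha * x_min / (alpha - 1)

print(pareto_mean(2, 3.0))            # 3.0
print(round(pareto_mean(1, 1.1), 3))  # 11.0 -- slightly heavier tail, much larger mean
print(pareto_mean(1, 0.9))            # inf  -- heavier still: the mean diverges
```

This sensitivity is why it’s worth spending real effort on estimating the prior’s parameters rather than eyeballing them.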
The Open Philanthropy Project has made some grants that look substantially less impactful than some of its others, and some people have questioned the choice. I want to discuss some reasons why these sorts of grants might plausibly be a good idea, and why I ultimately disagree.
I believe Open Phil’s grants on criminal justice and land use reform are much less effective in expectation1 than its grants on animal advocacy and global catastrophic risks. This would naively suggest that Open Phil should spend all its resources on these more effective causes, and none on the less effective ones. (Alternatively, if you believe that the grants on US policy do much more good than the grants on global catastrophic risk, then perhaps Open Phil should focus exclusively on the former.) There are some reasons to question this, but I believe that the naive approach is correct in the end.
Why give grants in cause areas that look much less effective than others? Why give grants to lots of cause areas rather than just a few? Let’s look at some possible answers to these questions.