Is Preventing Global Poverty Good?

This essay could perhaps be called “is improving people’s economic station good?”, because all the arguments presented here apply just as well to people in the developed world as to people in the developing world. But my readers generally don’t spend much effort trying to make people in developed countries wealthier, while many of them do a great deal to make the globally poor better off.

When people calculate the effects of efforts to alleviate global poverty, they tend to report figures in terms like QALYs per dollar. I believe this is misleading: it gives a reasonably precise measure of the direct first-order effects while ignoring the other, more complicated consequences of interventions. These other effects include some negative ones and some positive ones, but they tend to significantly outweigh the first-order effects. Instead of evaluating global poverty interventions only by the magnitude of their first-order effects, we should look at their indirect effects as well.

Let’s divide the effects into three categories: direct, medium-term, and long-term. We have the most certainty about direct effects, but long-term effects matter most.

Direct effects

Donating to global poverty[1] has one important direct effect: the people you’re donating to are better off. In the case of GiveDirectly, this happens because they have more money; for deworming charities, people are better off because they don’t have worms anymore, which likely substantially improves their quality of life[2]. We have better evidence about these direct effects than about anything else, and when people talk about the benefits of GiveWell top charities, they almost exclusively talk about these effects.

Medium-term effects

Global poverty interventions have some indirect effects that are still easier to quantify and reason about than their long-term effects on economic or technological progress (which I discuss in “Long-term effects” below).

  1. Making people wealthier causes them to eat more factory-farmed animals.
  2. Making people wealthier causes them to consume more environmental resources, reducing wild animal populations.
  3. Making people wealthier worsens climate change, increasing existential risk.
  4. Global poverty interventions (probably?) decrease fertility.

Kyle Bogosian’s calculations suggest that increasing someone’s income in the developing world by $1000 causes about 15 days of land animal suffering, which probably means that a donation to GiveDirectly does more harm via creating factory farming than it does good via making a human financially better off. His calculations rely on uncertain estimates, and I do not consider them robust, although they draw on some hard data, and it would probably be possible to substantially reduce the error bars on this estimate.

Additionally, making people wealthier causes them to consume more environmental resources. This has lots of effects, but the most direct one is that it reduces wild animal populations. If wild animal lives are net negative on balance, we should want to reduce the number of wild animals. I have not seen any adequate estimates of the effect size here so I cannot say how much it matters, but I suspect that it outweighs both the direct effect of making humans wealthier and the effect of increasing factory farming.

At the same time, if people consume more environmental resources, that contributes to climate change. This has an unclear effect on wild animal suffering but certainly increases the risk of extinction from severe runaway global warming.

Global poverty interventions probably decrease fertility. Unlike the other topics discussed in this section, lots of research has actually been done on this question, although I haven’t read it, so I can’t make any confident claims about it. But what I know suggests that making people healthier and wealthier probably causes them to have fewer kids, which reduces population size after a generation. This could be good or bad for all of the many reasons that people existing could be good or bad. Most directly, it means that fewer people get to live decent lives, which is bad. But it also means less suffering on factory farms. Fewer people possibly means less conversion of wild habitats and therefore more wild animal suffering, but at the same time, wealthier people consume more environmental resources, and I’d expect the increased-wealth factor to matter more than the reduced-population factor (wealthy countries consume far more environmental resources than poor countries).

Finally, and perhaps most importantly, promoting global poverty alleviation attracts more interest in effective altruism and possibly has spillover effects into other cause areas. Many people get into effective altruism by learning about how much good they can do by donating to global poverty efforts, and later move on to supporting other causes.

Long-term effects

Improving people’s standard of living has lots of potential long-term effects. Of the plausible effects I’ve thought of, these look the most important[3]:

  1. It could increase international stability by reducing scarcity/competition.
  2. It could decrease international stability by adding more major actors, making coordination more difficult.
  3. It probably increases research on dangerous technology[4].
  4. It probably increases research on beneficial technology.
  5. It probably makes differential intellectual progress harder.
  6. It causes humans to consume more environmental resources.
  7. It could decrease international stability by making arms races more likely.

Increasing international stability could be beneficial or harmful. Greater international stability reduces the chance of war, which reduces existential risk from nuclear or biological weapons (or other existentially threatening weapons of war that have yet to be developed). In a more stable world, nations probably have an easier time creating cooperative agreements to avoid arms races. Additionally, increasing stability probably increases technological research, which could be beneficial or harmful.

Technological research both increases our ability to avert catastrophes and increases the potential danger arising from powerful technologies. I suspect that improved technology generally increases the probability of an existential catastrophe. Right now our greatest existential threats arise from technologies developed by humans, and our technological development has done comparatively little to alleviate natural existential risks. I would expect this trend to continue.

If we increase global technological development, that probably makes it harder to make differential intellectual progress, which means that we have a more difficult time accelerating existential risk research to the point where it moves faster than research that increases existential risks. Helping poor people in countries like Malawi and Kenya probably doesn’t have much effect on technological development, but we should expect it to have at least small effects in the long run, in expectation.

Consuming more environmental resources could be either good or bad. As discussed before, it reduces habitats, which reduces wild animal suffering, and this is a good thing if wild animals’ lives are net negative. (Although climate change could increase wild animal populations.) It also probably increases existential risk by increasing the chance of runaway global warming, which is probably but not definitely bad.

Confidently claiming that alleviating global poverty has either good or bad long-term effects requires making conjunctive claims about uncertain effects. For poverty reduction to be beneficial, it must increase international stability, and this must reduce arms races and the chance of a major nuclear war, and preventing human extinction must be good; or it must reduce international stability, and this must slow technological development, and it must be beneficial to slow technological development (and, again, preventing human extinction must be good); or it must increase climate change, and this must reduce wild animal populations without substantially increasing existential risk, and wild animal lives must be net negative; etc. Claiming that alleviating global poverty does long-term harm requires making similarly complex conjunctive arguments.

Although I cannot make any confident claims here, I lean toward expecting these effects, in order of confidence:

  1. Global poverty alleviation accelerates dangerous technology more rapidly than beneficial technology, which is bad.
  2. Global poverty alleviation increases international cooperation, which is good.
  3. Global poverty alleviation increases climate change, which is bad.

I tend to think that the first effect outweighs the second, and I am more confident that it occurs, so I am weakly confident that making people wealthier has net negative effects in the long run.

Conclusion

Alleviating global poverty has second- and third-order effects, some of them good and some of them harmful; these effects are significant and hard to quantify. Without serious engagement with the long-term effects of global wealth, we cannot confidently claim that global poverty interventions do net good or net harm.

The evidence tends to point weakly toward the conclusion that making people wealthier does net harm. This makes me wary of supporting efforts to alleviate global poverty. On the other hand, the evidence is nowhere near strong enough to justify trying to prevent people from helping the global poor. Any arguments in this essay about the long-term effects of global poverty should be treated as highly speculative. Nonetheless, this suggests that we cannot confidently claim that alleviating global poverty helps the world in the long run.

Notes

  1. When I talk about donating to global poverty, I’m mostly talking about GiveWell top charities, because that’s what readers primarily donate to. 

  2. For GiveWell’s other top recommended charity, the Against Malaria Foundation (AMF), people primarily tout its life-saving benefits, but I believe these are severely overstated. Still, AMF does prevent people from getting non-fatal cases of malaria, which improves their quality of life, although I don’t know how valuable this is compared to cash transfers or deworming. 

  3. I’m sure I’m missing lots of important effects. I don’t believe people address these sorts of issues enough, so I expect there exist lots of significant long-term effects that haven’t been explored. 

  4. Developing countries currently have meager research output compared to wealthy nations, but this will change as countries become richer. 


Why I'm Prioritizing Animal-Focused Values Spreading

Part of a series for My Cause Selection 2016. For background, see my writings on cause selection for 2015 and my series on quantitative models.

The last time I wrote about values spreading, I primarily listed reasons why we might expect existential risk reduction to have a greater impact. Now I’m going to look at why values spreading—and animal advocacy in particular—may look better.

When I developed a quantitative model for cause prioritization, the model claimed that effective animal advocacy has a greater expected impact than AI safety research. Let’s look at some qualitative reasons why the model produces this result:

  • Animal advocacy has lower variance—we’re more confident that it will do a lot of good, especially in the short to medium term.
  • Animal advocacy is more robustly positive—it seems unlikely to do lots of harm[1], whereas the current focus of AI safety research could plausibly do harm. (This is really another way of saying that AI safety interventions have high variance.)
  • The effects of animal advocacy on the far future arguably have better feedback loops.
  • Animal advocacy is more robust against overconfidence in speculative arguments. I believe we ought to discount the arguments for AI safety somewhat because they rely on hard-to-measure claims about the future. We could similarly say that we shouldn’t be too confident what effect animal advocacy will have on the far future, but it also has immediate benefits. Some people put a lot of weight on this sort of argument; I don’t give it tons of weight, but I’m still wary given that people have a history of making overconfident claims about what the future will look like.
Continue reading

Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply)

Let’s look at how we use frameworks to prioritize causes. We’ll start with the commonly used importance/neglectedness/tractability framework and see why it often works well and where it doesn’t match reality. Then we’ll consider an alternative approach.

Importance/neglectedness/tractability framework

When people do high-level cause prioritization, they often use an importance/neglectedness/tractability framework where they assess causes along three dimensions:

  1. Importance: How big is the problem?
  2. Neglectedness: How much work is being done on the problem already?
  3. Tractability: Can we make progress on the problem?

This framework acts as a useful guide to cause prioritization. Let’s look at some of its benefits and problems.
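As a toy illustration (my own numbers, not from this post), the three factors are often combined multiplicatively, so a weak score on any one dimension drags down a cause’s overall score:

```python
# Toy sketch of multiplicative importance/neglectedness/tractability scoring.
# The scores below are made-up illustrative numbers, not real cause estimates.

def itn_score(importance, neglectedness, tractability):
    """Combine the three factors multiplicatively into one rough score."""
    return importance * neglectedness * tractability

causes = {
    "cause_a": itn_score(importance=8, neglectedness=2, tractability=5),  # 80
    "cause_b": itn_score(importance=5, neglectedness=6, tractability=4),  # 120
}

# Rank causes by combined score (higher = more promising under this framework).
ranking = sorted(causes, key=causes.get, reverse=True)
print(ranking)  # ['cause_b', 'cause_a']
```

Multiplying the factors means a cause scoring near zero on any one dimension loses out even if the problem is enormous, which is part of why the framework can give counterintuitive answers.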

Continue reading

Quantifying the Far Future Effects of Interventions

Part of a series on quantitative models for cause selection.

Introduction

In the past I’ve written qualitatively about what sorts of interventions likely have the best far-future effects. But qualitative analysis may not be the best way to decide this sort of thing, so let’s build some quantitative models.

I have constructed a model of various interventions and put them in a spreadsheet. This essay describes how I came up with the formulas to estimate the value of each intervention and makes a rough attempt at estimating the inputs to the formulas. For each input, I give either a mean and σ[1] or an 80% confidence interval (which can be converted into a mean and σ). Then I combine them to get a mean and σ for the estimated value of the intervention.
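The conversion from an 80% confidence interval to a mean and σ can be sketched as follows, assuming the quantity is normally distributed (the model’s actual inputs may use lognormal distributions, and the interval below is made up):

```python
# Sketch: convert an 80% confidence interval to a mean and standard deviation,
# assuming a normal distribution. The interval values are hypothetical.
from statistics import NormalDist

Z_90 = NormalDist().inv_cdf(0.9)  # ~1.2816; 80% of mass lies within +/- Z_90 sigma

def ci80_to_mean_sigma(low, high):
    mean = (low + high) / 2
    sigma = (high - low) / (2 * Z_90)
    return mean, sigma

mean, sigma = ci80_to_mean_sigma(10, 100)
print(round(mean, 1), round(sigma, 1))  # 55.0 35.1
```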

This essay acts as a supplement to my explanation of my quantitative model. The other post explains how the model works; this one goes into the nitty-gritty details of why I set up the inputs the way I did.

Note: All the confidence intervals here are rough first attempts and don’t represent my current best estimates; my main goal is to explain how I developed the presented series of models. I use dozens of different confidence intervals in this essay, so for the sake of time I have not revised them as the model changed. To see my up-to-date estimates, see my final model. I’m happy to hear things you think I should change, and I’ll edit my final model to incorporate feedback. And if you want to change the numbers, you can download the spreadsheet and mess around with it. This describes how to use the spreadsheet.

Continue reading

A Complete Quantitative Model for Cause Selection

Part of a series on quantitative models for cause selection.

Update: There’s now a web app that can do everything the spreadsheet could and more.

Quantitative models offer a superior approach to determining which interventions to support. However, naive cost-effectiveness estimates have big problems. In particular:

  1. They don’t give stronger consideration to more robust estimates.
  2. They don’t always account for all relevant factors.

We can fix the first problem by starting from a prior distribution and updating based on evidence: more robust evidence will cause a bigger probability update. And we can fix the second problem by carefully considering the most important effects of interventions and developing a strong quantitative estimate that incorporates all of them.
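The first fix can be sketched as a precision-weighted average, under the simplifying assumption (mine, not necessarily the model’s) that both the prior and the naive estimate are normal distributions over log cost-effectiveness:

```python
# Sketch of a Bayesian adjustment: combine a normal prior with a normal
# estimate. More robust evidence (smaller est_sigma) pulls the posterior
# further from the prior. The distributions and numbers here are assumptions
# for illustration, not the post's actual model.

def posterior(prior_mean, prior_sigma, est_mean, est_sigma):
    """Precision-weighted combination of a normal prior and a normal estimate."""
    w_prior = 1 / prior_sigma**2
    w_est = 1 / est_sigma**2
    mean = (w_prior * prior_mean + w_est * est_mean) / (w_prior + w_est)
    sigma = (w_prior + w_est) ** -0.5
    return mean, sigma

# A weak estimate barely moves the prior; a robust one moves it a lot.
weak = posterior(0.0, 1.0, est_mean=3.0, est_sigma=3.0)    # mean stays near 0
robust = posterior(0.0, 1.0, est_mean=3.0, est_sigma=0.3)  # mean gets close to 3
```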

So that’s what I did. I developed a quantitative model for comparing interventions and wrote a spreadsheet to implement it.

Continue reading

GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics

Update 2016-12-14: GiveWell’s 2016 cost-effectiveness analysis has updated the way it handles population ethics. It now explicitly takes the value of saving a 5-year old’s life as input and no longer assumes that it’s worth 36 life-years.

Update 2018-08-14: I recently revisited GiveWell’s 2018 cost-effectiveness analysis. Although the analysis spreadsheet no longer enforces the “GiveWell view” described in this essay, most GiveWell employees still implicitly adopt it. As a result, I believe GiveWell is still substantially mis-estimating the cost-effectiveness of the Against Malaria Foundation.

GiveWell claims that the Against Malaria Foundation (AMF) is about 10 times as cost-effective as GiveDirectly. This entails unusual claims about population ethics that I believe many people would reject, and according to other plausible views of population ethics, AMF looks less cost-effective than the other GiveWell top charities.

GiveWell’s Implicit Assumptions

A GiveWell-commissioned report suggests that population will hardly change as a result of AMF saving lives. GiveWell’s cost-effectiveness model for AMF assumes that saving one life creates about 35 quality-adjusted life years (QALYs), and uses this to assign a quantitative value to the benefits of saving a life. But if AMF’s life-saving does not increase population, then it isn’t actually adding (human) QALYs to the world; so you can’t justify AMF’s purported cost-effectiveness by saying it creates more happy human life, because it doesn’t.

You could instead justify AMF’s life-saving effects by saying it’s inherently good to save a life, in which case GiveWell’s cost-effectiveness model shouldn’t interpret the value of lives saved in terms of QALYs created/destroyed, and should include a term for the inherent value of saving a life.

GiveWell claims that AMF is about 10 times more cost-effective than GiveDirectly, and GiveWell ranks AMF as its top charity partially on this basis (see “Summary of key considerations for top charities” in the linked article). This claim depends on the assumption that saving a life creates 35 QALYs.
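To make the arithmetic behind that comparison concrete (the cost-per-life figure below is a hypothetical round number for illustration, not GiveWell’s actual estimate):

```python
# Rough sketch of how the 35-QALY assumption drives the cost-effectiveness
# estimate. COST_PER_LIFE is a hypothetical round number, not GiveWell's figure.

COST_PER_LIFE = 3500   # hypothetical dollars per life saved
QALYS_PER_LIFE = 35    # GiveWell's assumed QALYs created per life saved

cost_per_qaly = COST_PER_LIFE / QALYS_PER_LIFE
print(cost_per_qaly)  # 100.0 (dollars per QALY)

# If saving a life leaves long-run population roughly unchanged, the net
# QALYs created are ~0 and the cost-per-QALY framing breaks down; the model
# would instead need a separate term for the inherent value of saving a life.
```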

To justify GiveWell’s cost-effectiveness analysis, you could say that it is good to cause existing people to live longer, but it is not bad to prevent people from existing. (Sean Conley of GiveWell says he and many other GiveWell staffers believe this.)

In particular, you’d have to assume that:

Continue reading

On Priors

Part of a series on quantitative models for cause selection.

Introduction

One major reason that effective altruists disagree about which causes to support is that they have different opinions on how strong an evidence base an intervention should have. Previously, I wrote about how we can build a formal model to calculate expected value estimates for interventions. You start with a prior belief about how effective interventions tend to be, and then adjust your naive cost-effectiveness estimates based on the strength of the evidence behind them. If an intervention has stronger evidence behind it, you can be more confident that it’s better than your prior estimate.

For a model like this to be effective, we need to choose a good prior belief. We start with a prior probability distribution P where P(x) gives the probability that a randomly chosen intervention[1] has utility x (for whatever metric of utility we’re using, e.g. lives saved). To determine the posterior expected value of an intervention, we combine this prior distribution with our evidence about how much good the intervention does.

For this to work, we need to know what the prior distribution looks like. In this essay, I attempt to determine what shape the prior distribution has and then estimate the values of its parameters.

Continue reading

How Should a Large Donor Prioritize Cause Areas?

Introduction

The Open Philanthropy Project has made some grants that look substantially less impactful than its others, and some people have questioned these choices. I want to discuss some reasons why these sorts of grants might plausibly be a good idea, and why I ultimately disagree.

I believe Open Phil’s grants on criminal justice and land use reform are much less effective in expectation[1] than its grants on animal advocacy and global catastrophic risks. This would naively suggest that Open Phil should spend all its resources on these more effective causes, and none on the less effective ones. (Alternatively, if you believe that the grants on US policy do much more good than the grants on global catastrophic risk, then perhaps Open Phil should focus exclusively on the former.) There are some reasons to question this, but I believe that the naive approach is correct in the end.

Why give grants in cause areas that look much less effective than others? Why give grants to lots of cause areas rather than just a few? Let’s look at some possible answers to these questions.

Continue reading

Preventing Human Extinction, Now With Numbers!

Part of a series on quantitative models for cause selection.

Introduction

Last time, I wrote about the most likely far future scenarios and how good they would probably be. But my last post wasn’t precise enough, so I’m updating it to present more quantitative evidence.

Particularly for determining the value of existential risk reduction, we need to approximate the probability of various far future scenarios to estimate how good the far future will be.

I’m going to ignore unknowns here; they obviously exist, but I don’t know what they’ll look like (you know, because they’re unknowns), so I’ll assume they don’t significantly change the outcome in expectation.

Here are the scenarios I listed before and estimates of their likelihood, conditional on non-extinction:

*not mutually exclusive events

(Kind of hard to read; sorry, but I spent two hours trying to get flowcharts to work so this is gonna have to do. You can see the full-size image here or by clicking on the image.)

I explain my reasoning on how I arrived at these probabilities in my previous post. I didn’t explicitly give my probability estimates, but I explained most of the reasoning that led to the estimates I share here.

Some of the calculations I use make certain controversial assumptions about the moral value of non-human animals or computer simulations. I feel comfortable making these assumptions because I believe they are well-founded. At the same time, I recognize that a lot of people disagree, and if you use your own numbers in these calculations, you might get substantially different results.

Continue reading

Expected Value Estimates You Can (Maybe) Take Literally

Part of a series on quantitative models for cause selection.

Alternate title: Excessive Pessimism About Far Future Causes

In my post on cause selection, I wrote that I was roughly indifferent between $1 to MIRI, $5 to The Humane League (THL), and $10 to AMF. I based my estimate for THL on the evidence and cost-effectiveness estimates for veg ads and leafleting. Our best estimates suggested that these are conservatively 10 times as cost-effective as malaria nets, but the evidence was fairly weak. Based on intuition, I decided to adjust this 10x difference down to 2x, but I didn’t have a strong justification for the choice.

Corporate outreach has a lower burden of proof (the causal chain is much clearer), and estimates suggest that it may be ten times more effective than ACE top charities’ aggregate activities[1]. So does that mean I should be indifferent between $5 to ACE top charities and $0.50 to corporate campaigns? Or perhaps even less, because the evidence for corporate campaigns is stronger? But I wouldn’t expect this 10x difference to make corporate campaigns look better than AI safety, so I can’t say both that corporate campaigns are ten times better than ACE top charities and also that AI safety is only five times better. My previous model, in which I took expected value estimates and adjusted them based on my intuition, was clearly inadequate. How do I resolve this? In general, how can we quantify the value of robust, moderately cost-effective interventions against non-robust but (ostensibly) highly cost-effective interventions?

To answer that question, we have to get more abstract.

Continue reading
