The Myth that Reducing Wild Animal Suffering Is Intractable

Lots of people accept that wild animal suffering is a big problem, but they believe it’s completely intractable. I even see some people claim that it’s one of the biggest problems in the world but that we still shouldn’t try to do anything about it. Wild animal suffering is in fact much more tractable than most people believe.

If we think wild animal suffering is a pressing problem and we want to do something about it, what can we do?


Preventing Human Extinction, Now With Numbers!

Part of a series on quantitative models for cause selection.

Introduction

Last time, I wrote about the most likely far future scenarios and how good they would probably be. But my last post wasn’t precise enough, so I’m updating it to present more quantitative evidence.

In particular, to determine the value of existential risk reduction, we need to approximate the probabilities of various far future scenarios so that we can estimate how good the far future will be in expectation.

I’m going to ignore unknowns here–they obviously exist, but I don’t know what they’ll look like (you know, because they’re unknowns), so I’ll assume they don’t significantly change the outcome in expectation.

Here are the scenarios I listed before and estimates of their likelihood, conditional on non-extinction:

*not mutually exclusive events

(Kind of hard to read; sorry, but I spent two hours trying to get flowcharts to work so this is gonna have to do. You can see the full-size image here or by clicking on the image.)

I explain my reasoning on how I arrived at these probabilities in my previous post. I didn’t explicitly give my probability estimates, but I explained most of the reasoning that led to the estimates I share here.
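To illustrate how scenario probabilities feed into an overall estimate, here is a minimal sketch of the expected-value calculation. The scenario names, probabilities, and values below are placeholder assumptions for illustration only, not the actual estimates in the image:

```python
# Toy expected-value calculation over far-future scenarios.
# All names, probabilities, and values are PLACEHOLDERS,
# not the actual estimates from the post.
scenarios = {
    # name: (probability conditional on non-extinction, relative value)
    "flourishing future": (0.4, 100.0),
    "mediocre future": (0.4, 10.0),
    "bad future": (0.2, -50.0),
}

expected_value = sum(p * v for p, v in scenarios.values())
print(expected_value)  # 0.4*100 + 0.4*10 + 0.2*(-50) = 34.0
```

The real calculation is messier because some scenarios are not mutually exclusive, but the basic structure–probability-weighted values summed across scenarios–is the same.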

Some of the calculations I use make certain controversial assumptions about the moral value of non-human animals or computer simulations. I feel comfortable making these assumptions because I believe they are well-founded. At the same time, I recognize that a lot of people disagree, and if you use your own numbers in these calculations, you might get substantially different results.


Expected Value Estimates You Can (Maybe) Take Literally

Part of a series on quantitative models for cause selection.

Alternate title: Excessive Pessimism About Far Future Causes

In my post on cause selection, I wrote that I was roughly indifferent between $1 to MIRI, $5 to The Humane League (THL), and $10 to AMF. I based my estimate for THL on the evidence and cost-effectiveness estimates for veg ads and leafleting. Our best estimates suggested that these are conservatively 10 times as cost-effective as malaria nets, but the evidence was fairly weak. Based on intuition, I decided to adjust this 10x difference down to 2x, but I didn’t have a strong justification for the choice.

Corporate outreach has a lower burden of proof (the causal chain is much clearer), and estimates suggest that it may be ten times more effective than ACE top charities’ aggregate activities1. So does that mean I should be indifferent between $5 to ACE top charities and $0.50 to corporate campaigns? Or perhaps even less, because the evidence for corporate campaigns is stronger? But I wouldn’t expect this 10x difference to make corporate campaigns look better than AI safety, so I can’t say both that corporate campaigns are ten times better than ACE top charities and that AI safety is only five times better. My previous model, in which I took expected value estimates and adjusted them based on my intuition, was clearly inadequate. How do I resolve this? In general, how can we weigh robust, moderately cost-effective interventions against non-robust but (ostensibly) highly cost-effective interventions?
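One standard way to formalize this tradeoff is a Bayesian adjustment: treat each cost-effectiveness estimate as noisy evidence and combine it with a prior, so that weakly evidenced estimates get shrunk more than robust ones. The sketch below is only illustrative; the prior, the variances, and the log-normal setup are my assumptions, not numbers or methods from the post:

```python
import math

def posterior_log_ce(prior_mean, prior_var, est_mean, est_var):
    """Normal-normal Bayesian update on log cost-effectiveness."""
    precision = 1 / prior_var + 1 / est_var
    return (prior_mean / prior_var + est_mean / est_var) / precision

# Prior assumption: a typical intervention is about as cost-effective
# as malaria nets (log-ratio 0), with unit variance.
prior_mean, prior_var = 0.0, 1.0

# A weakly evidenced claim that veg ads are 10x nets (high variance)...
weak = math.exp(posterior_log_ce(prior_mean, prior_var, math.log(10), 4.0))

# ...vs. a robustly evidenced claim of the same 10x (low variance).
strong = math.exp(posterior_log_ce(prior_mean, prior_var, math.log(10), 0.25))

print(round(weak, 1), round(strong, 1))  # the weak 10x shrinks far more
```

Under these made-up numbers, the weakly evidenced 10x estimate shrinks to roughly 1.6x–close to the intuitive 2x adjustment mentioned above–while the robust estimate keeps most of its multiplier. The point is that the shrinkage falls out of the model instead of being an ad hoc intuition.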

To answer that question, we have to get more abstract.


What Would Change My Mind About Where to Donate

If I’m wrong about anything, I want you to change my mind. I want to make that as easy as possible, so I’m going to give a list of charities/interventions and say what would convince me to support each of them.

Please try to change my mind! I prefer public discussions so the best thing to do is to comment on this post or on Facebook, but if you want to talk to me privately you can email me or message me on Facebook.

My Current Position

I discuss how I got to my current position here. Here’s a quick summary:


How Valuable Are GiveWell Research Analysts?

Update 2016-05-18: I no longer entirely agree with this post. In particular, I believe GiveWell employees are more replaceable than this post suggests. I may write about my updated beliefs in the future.

Edited 2016-03-11 because I’ve adjusted my estimate of the value of global poverty charities downward, which makes working at GiveWell look worse.

Edited 2016-03-11 to add a new section.

Edited 2016-02-16 to update the model based on feedback I’ve received. Temporal replaceability doesn’t apply, so I was underestimating the value of research analysts.

Summary: The value of working as a research analyst1 at GiveWell is determined by:

  • Temporal replaceability of employees
  • How good you are relative to the counterfactual employee
  • How much good GiveWell money moved does relative to where you could donate earnings
    • A lot if you care most about global poverty, not as much if you care about other cause areas
  • How directly more employees translate into better recommendations and more money moved
    • This relationship looks strong for Open Phil and weak for GiveWell Classic

If you believe GiveWell top charities are the best place to donate, working at GiveWell is probably a really strong career option; if you believe other charities are substantially better (as I do) and you have good earning potential, earning to give is probably better.
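As a sketch, the factors in the summary can be combined into a crude multiplicative model. Every number below is a made-up placeholder, and for simplicity the sketch ignores temporal replaceability entirely:

```python
# Crude multiplicative model of one analyst-year at GiveWell.
# All numbers are made-up placeholders, not real estimates.
relative_quality = 1.2         # you vs. the counterfactual hire
extra_money_moved = 1_000_000  # additional money moved per analyst-year ($)
marginal_influence = 0.1       # how directly one more analyst moves that money
ce_discount = 0.5              # moved money's value vs. your preferred charities

value_at_givewell = (relative_quality * extra_money_moved
                     * marginal_influence * ce_discount)

value_earning_to_give = 100_000  # what you could donate per year instead

print(value_at_givewell, value_earning_to_give)
```

With these particular placeholders, earning to give comes out ahead–consistent with the conclusion above for someone who thinks other charities beat GiveWell’s top charities–but different inputs easily flip the comparison.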


Feedback Loops for Values Spreading

I recently wrote about values spreading, and came out weakly in favor of focusing on global catastrophic risks over values spreading. However, I neglected an important consideration in favor of values spreading: feedback loops.

When we try to take actions that will benefit the long-term future but where we don’t get immediate feedback on our actions, it’s easy to end up taking actions that do nothing to achieve our goals. For instance, it is surprisingly difficult to predict in advance how effective a social intervention will be. This gives reason to be skeptical about the effectiveness of interventions with long feedback loops.

Interventions on global catastrophic risks have really, really bad feedback loops. It’s nearly impossible to tell if anything we do reduces the risk of a global pandemic or unfriendly AI. An intervention focused on spreading good values is substantially easier to test. An organization like Animal Ethics can produce immediate, measurable changes in people’s values. Measuring these changes is difficult, and evidence for the effectiveness of advocacy is a lot weaker than the evidence for, say, insecticide-treated bednets to prevent malaria. But short-term values spreading still has an advantage over GCR reduction in that it’s measurable in principle.

Still, will measurable short-term changes in values result in sustainable long-term changes? That’s a harder question to answer. It certainly seems plausible that values shifts today will lead to shifts in the long term; but, as mentioned above, interventions that sound plausible frequently turn out not to work. Values spreading may not actually have a stronger case here than GCR reduction.

We can find feedback loops on GCR reduction that measure proxy variables. This is particularly easy in the case of climate change, where we can measure whether an intervention reduces greenhouse gas levels in the atmosphere. But we can also find feedback loops for something like AI safety research: we might say MIRI is more successful if it publishes more technical papers. This is not a particularly direct metric of whether MIRI is reducing AI risk, but it’s still a place where we can get quick feedback.

Given that short-term value shifts don’t necessarily predict long-term shifts, and that we can measure proxy variables for global catastrophic risk reduction, it’s non-obvious that values spreading has better feedback loops than GCR reduction. There does seem to be some sense in which value shifts today and value shifts in a thousand years are more strongly linked than, say, number of AI risk papers published and a reduction in AI risk; although this might just be because both involve value shifts–they may not actually be that strongly tied, or tied at all.

Values spreading appears to have the advantage of short-term feedback loops. But it’s not clear that these changes have long-term effects, and this claim isn’t any easier to test than the claim that GCR work today reduces global catastrophic risk.


More on REG's Room for More Funding

A few people have expressed interest in donating to REG, and the main concern I’ve heard is whether REG could effectively use additional funding. I spent some more time learning about this. My broad conclusion is roughly the same as what I wrote previously: REG can probably make good use of an additional $100,000 or so, and perhaps more, though with less confidence.

Poker Market Saturation

Tobias from REG claims that about 70% of high-earning poker players have heard of REG, although many of those have had only limited engagement. He claims that they have had the most success convincing players to join through personal contact, and REG has not had contact with many of the players who have heard of it. This gives some reason to be optimistic that REG can expand substantially among high-earning poker players, although I would not be surprised if it started hitting rapidly diminishing returns once it grows to about 2x its current size.

To date, REG has not spent much effort on marketing to non-high-earning poker players. This field is much larger, but targeting lower-earning players should be less efficient because each individual player donates less money. To get a better sense of how important this is, I would have to know what the income distribution looks like for poker players, and getting this information is nontrivial.

REG would like to hire a new marketing person with experience in the poker world. They would probably be considerably better at marketing than any of the current REG employees. For this reason, additional funds to REG may actually be more effective than past funds, although this is difficult to predict in advance.


Response to the Global Priorities Project on Human and Animal Interventions

Owen Cotton-Barratt of the Global Priorities Project wrote an article on comparing human and animal interventions. His major conclusions include:

  1. Indirect long-term effects dominate considerations.
  2. Changing behavior of far-future humans matters more than alleviating immediate animal suffering.
  3. Helping humans has better flow-through effects than helping non-human animals.

The analysis effectively concludes that helping humans is more important than helping non-human animals, but I believe it misses a few important considerations.

(These are fairly quick thoughts about which I have a lot of uncertainty; I’m publishing them here for the sake of making the conversation public.)


Cause Prioritization Research I Would Like to See

Here are some research topics on cause prioritization that look important and neglected, in no particular order.

  1. Look at historical examples of speculative causes (especially ones that were meant to affect the long-ish-term future) that succeeded or failed and examine why.
  2. Try to determine how well picking winning companies translates to picking winning charities.
  3. In line with 2, consider whether there exist simple strategies analogous to value investing that can find good charities.
  4. Find plausibly effective biosecurity charities.
  5. Develop a rigorous model for comparing the value of existential risk reduction to values spreading.
  6. Perform basic analyses of lots of EA-neglected or weird cause areas (e.g. depression, argument mapping, increasing savings, personal productivity–see here) and identify which ones look most promising.
  7. Reason about the expected value of the far future.
  8. Investigate neglected x-risk and meta charities (FHI, CSER, GPP, etc.).
  9. Reason about expected value estimates in general. How accurate are they? Do they tend to be overconfident? How overconfident? Do some things predictably make them more reliable?
