What Would Change My Mind About Where to Donate
If I’m wrong about anything, I want you to change my mind. I want to make that as easy as possible, so I’m going to give a list of charities/interventions and say what would convince me to support each of them.
Please try to change my mind! I prefer public discussions so the best thing to do is to comment on this post or on Facebook, but if you want to talk to me privately you can email me or message me on Facebook.
My Current Position
I discuss how I got to my current position here. Here’s a quick summary:
- Far future considerations probably dominate.
- The most important ways to affect the far future are global catastrophic risk (GCR) reduction and values spreading.
- GCR reduction looks weakly more important.
- Among GCRs, AI risk and biosecurity look the most important.
- Of these, marginal funding for AI risk looks like it will do more good.
- Of AI risk organizations, MIRI looks like the most important to fund.
- MIRI is probably not better than other top charities by a wide margin, so Raising for Effective Giving (REG)’s fundraising multiplier makes it an even better option (I sketch this arithmetic below).
(The landscape has changed substantially since I last donated and I will definitely investigate these questions further before donating again, so consider this reasoning tentative.)
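To make the fundraising-multiplier point concrete, here’s a minimal sketch of the arithmetic. All the numbers here (the multiplier, the counterfactual fraction, and the relative value of the recipient charities) are made-up placeholders, not REG’s actual figures, and the function names are my own; only the structure of the comparison matters.

```python
# Hypothetical sketch of the fundraising-multiplier comparison.
# None of these numbers are real; they only illustrate the structure.

def value_of_direct_donation(amount, relative_value=1.0):
    """Value of giving `amount` straight to my top object-level charity,
    normalized so that one dollar there is worth one unit."""
    return amount * relative_value


def value_of_donation_via_fundraiser(amount, multiplier, counterfactual,
                                     avg_relative_value):
    """Value of giving `amount` to a fundraising org instead: each donated
    dollar raises `multiplier` dollars, only a `counterfactual` fraction of
    which wouldn't have been donated anyway, and the raised money goes to
    charities worth `avg_relative_value` per dollar relative to the top
    object-level charity."""
    return amount * multiplier * counterfactual * avg_relative_value


donation = 1000  # dollars

direct = value_of_direct_donation(donation)
# Placeholder assumptions: a 5x fundraising ratio, 60% counterfactual
# donations, and recipient charities half as valuable per dollar.
via_fundraiser = value_of_donation_via_fundraiser(
    donation, multiplier=5.0, counterfactual=0.6, avg_relative_value=0.5)

print(f"direct donation: {direct:.0f} units")
print(f"via fundraiser:  {via_fundraiser:.0f} units")
# With these made-up numbers the fundraiser wins (1500 vs. 1000 units).
```

The takeaway is just the shape of the comparison: a fundraising charity beats the best object-level charity only as long as that charity isn’t more than multiplier × counterfactual times as effective per dollar as the charities the fundraiser actually supports.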
Here’s a list of promising causes I could support. They’re ordered roughly by how likely I think I am to change my mind. Even for the ones near the bottom, there’s a non-negligible probability that I will donate to them next year.
A note on how these are organized: for each category I list one or more bullet points, where each bullet point gives something that would update me. Some bullet points contain ANDs; for those, all the parts have to be true for me to fully change my mind.
REG/MIRI
My current two favorite charities are REG and MIRI; here’s what would push me toward REG and away from MIRI.
- (1) There’s good reason to believe that REG will maintain a high fundraising ratio, AND (2) REG has room for more funding.
- (3) MIRI is not substantially better than other charities REG raises money for.
An AI risk org other than MIRI
- (1) It has room for more funding, AND (2) it produces important research that’s higher quality than MIRI’s, OR (3) it does important non-research-related AI work¹, AND (4) it’s unlikely to do things that harm non-human animals.
The last time I donated, I chose not to seriously consider AI risk orgs other than MIRI because I wasn’t confident that they had room for more funding. There are a few orgs that I know very little about and I plan on learning more about them before donating again, but my current second-favorite is the Future of Humanity Institute (FHI). There are a couple of important reasons why I prefer MIRI to FHI even without considering room for more funding. First, FHI’s publications seem less relevant or important. I’d like to learn more about how FHI prioritizes research, but on the surface it looks less impactful overall than what MIRI does. Second, I’m not confident that FHI will make decisions that are good for non-human animals (I explain why this matters here and here). MIRI isn’t as good here as I would like, but it’s definitely better than FHI in this respect.
The main point in favor of FHI over MIRI is that it has more academic credibility and its researchers have stronger credentials, although I believe this is outweighed by the fact that MIRI’s publications appear to cover more important topics. FHI has more publications, but it also has more researchers, and its publications-per-researcher rate looks about the same as MIRI’s.
GCR org other than AI risk
- (1) There is a good org working on a GCR with room for more funding, AND (2) there are good reasons to believe that this GCR is more important than AI risk.
(2) is tricky because AI risk looks extra scary. It’s the only (known) GCR that has the potential to entirely reshape the universe, not just Earth, and I’m weakly confident that an unfriendly AI spreading itself across the universe is worse than just destroying Earth. This is non-obvious so I’ll briefly explain why.
Right now, life on Earth is probably net negative thanks to wild animal suffering, although this probably won’t be true in the long term. If we expect humans to eventually create circumstances that alleviate wild animal suffering, we should presumably expect intelligent life on other planets² to do something similar. Human goals are a product of natural selection, and the same would be true of intelligent life elsewhere; it’s reasonable to expect that any technologically advanced intelligent species would want to use available resources to achieve its own goals rather than letting other species have those resources. So an unfriendly AI that spread itself across the universe would preempt all of those future civilizations and whatever improvements they would have made, which is why universal expansion looks worse than merely destroying Earth.
This argument is pretty tenuous and it probably wouldn’t be that hard to convince me that it’s wrong.
Animal Charity Evaluators (ACE)
- (1) The evidence for any GCR org is too weak.
- (2) Values spreading looks more important than GCRs, AND (3) animal charities (broadly construed) look like the most promising form of values spreading, AND (4) it seems likely that additional research would discover substantially better values spreading orgs, AND (5) ACE seems likely to produce such research.
That’s a lot of conjunctions, but I still consider myself pretty likely to change my mind here because I believe (3), (4), and (5) are probably true.
Some other weird values spreading charity
- (1) Values spreading looks more important than GCRs, AND (2) such an organization actually exists (this one is kind of important), AND (3) it looks effective and has room for more funding.
The only charities I know of that might sort of fit this category are Animal Ethics and The Non-Human Rights Project.
Org doing important non-GCR research
This includes orgs like the Foundational Research Institute or the Global Priorities Project.
- (1) There’s good reason to believe that its research is more important than GCR research, AND (2) its research is high-quality and it has room for more funding.
ACE top charity
- (1) The evidence for more speculative interventions (i.e. everything above here) is too weak.
- (2) Values spreading is more important than GCRs, AND (3) none of the more speculative orgs look more promising than ACE top charities, AND (4) it doesn’t seem likely that investing in ACE research will find substantially better interventions.
Right now, if I had to choose between ACE top charities and GiveWell top charities, I would weakly prefer ACE top charities because they look much higher impact. But the evidence in their favor is a lot weaker, and I wouldn’t be that surprised if they turned out not to do much good. I wasn’t bullish on animal charities a few months ago, but I’ve recently learned more about corporate campaigns, which seem more robustly beneficial than leafleting or advertising, so I’ve updated toward animal charities being better than global poverty. Plus, ACE’s research in 2015 looks higher quality than in previous years.
Fundraising charity other than REG
- (1) The best interventions aren’t much better than global poverty, AND (2) a fundraising charity has a strong fundraising ratio and room for more funding.
- (3) A fundraising charity other than REG robustly raises a lot of money for animal or GCR charities.
The section on “GiveWell top charity” gives some reasons why I might update toward global poverty being particularly important.
I say “other than REG” here because a lot of REG’s raised money goes toward charities that I think are substantially more effective than global poverty, but this isn’t the case for any other fundraising orgs I know of.
GiveWell top charity
- (1) GiveWell top charities are genuinely robustly positive, AND (2) the far future isn’t that important relative to short-term effects, AND (3) the evidence for animal charities is too weak, or they’re a lot worse than current evidence suggests.
- (4) GiveWell top charities have really big positive effects on the far future.
- (5) There’s some important reason (e.g. signaling) to donate to GiveWell top charities.
I would be really surprised if I updated to believe (4) or (5). Right now I don’t buy the robustness argument for GiveWell top charities because I don’t think they’re robustly net positive except in the very short term; but some smart people disagree with me, and I don’t really understand why, so I wouldn’t be that surprised if I changed my mind here.
Technically (5) could be a consideration in favor of any charity, but it seems more likely to be true for GiveWell top charities.
EA movement-building org (non-fundraising)
- (1) There’s good evidence that the average EA has a strongly positive impact, AND (2) good evidence that a movement-building org is successful, AND (3) movement building has a large enough benefit that it’s better than directly supporting the best object-level charity or fundraising charity.
Anything else
I can’t think of anything plausible that would convince me to donate somewhere else, but there are unknown unknowns so I certainly wouldn’t rule this out.
Notes
1. Thanks to the commenters who pointed out this possibility. ↩
2. It’s pretty likely that intelligent life exists on other planets. The universe is really big, and we’re probably not that special. Most intelligent life is probably not sufficiently technologically advanced to capture most of a planet’s resources, but again, the universe is really big. ↩