Cross-posted to the EA Forum. If you want to leave a comment, you can post it there.
Last edited 2015-09-24.
In this essay, I weigh the arguments for and against different causes and try to identify which one does the most good. I give some general considerations on cause selection and then lay out a list of causes followed by a list of organizations. I break up considerations on these causes and organizations into five categories: Size of Impact, Strength of Evidence, Tractability, Neglectedness/Room for More Funding, and Learning Value. This roughly mirrors the traditional Importance, Tractability, Neglectedness criteria. I identify which cause areas look most promising. Then I examine a list of organizations working in these cause areas and narrow the list down to a few finalists. In the last section, I directly compare these finalists against each other and identify which organization looks strongest.
You can skip to Conclusions to see summaries of why I prioritize the finalists I chose, why I did not consider any of the other charities as finalists, and my decision about who to fund.
I chose these three finalists:
- Machine Intelligence Research Institute (MIRI) makes the strongest case for an extraordinarily high impact.
- Animal Charity Evaluators (ACE) makes a strong case for high impact and the strongest case for high learning value.
- Raising for Effective Giving (REG) produces a donation multiplier for both MIRI and ACE, so it potentially has an even larger impact.
Based on everything I considered, REG looks like the strongest charity because it produces a large donation multiplier and it directs donations to both MIRI and ACE (as well as other effective charities).
- General Considerations
- Global Poverty
- Factory Farming
- Far Future (General)
- Values Spreading to Improve the Far Future
- Global Catastrophic Risk Reduction
- AI Safety
- GCRs Other than AI Safety
- Movement Building
- Machine Intelligence Research Institute (MIRI)
- Future of Humanity Institute (FHI) and Centre for the Study of Existential Risk (CSER)
- Future of Life Institute
- Open Phil-Recommended GCR Charities
- GiveWell/Open Philanthropy Project
- Animal Charity Evaluators (ACE)
- Animal Ethics (AE) and Foundational Research Institute (FRI)
- Giving What We Can (GWWC)
- Charity Science
- Raising for Effective Giving (REG)
- Other Organizations
Purpose of This Document
To date, my thinking on cause prioritization has been insufficiently organized or rigorous. This is an attempt to lay out all the considerations in my head for and against different causes and organizations and get some clarity about who to support.
This document was originally inspired by conversations with Buck Shlegeris about the importance of cause prioritization, for which he makes a good case here:
(Buck makes some non-obvious claims here but I agree with the main thesis that we should spend more effort on cause prioritization.)
EAs spend a tenth as much time discussing cause prioritization as they should. Cause prioritization is obviously incredibly important. If given perfect information you could know that you should be donating to [cause area 1] and you’re actually donating to [cause area 2], then you are doing probably at least an order of magnitude less good than you could be, and I’m only even granting you that much credit because donating to EA charities in [cause area 1] might raise the profile of EA and get more people to donate to [cause area 2] in the future.
If EAs were really interested in doing as much good as they could, then they would want to put their beliefs about cause prioritization under incredible scrutiny. I’m earning to give this year, and I plan to give about 25% of my income. If I could spend a month of my year full time researching cause prioritization, and I thought I was 80% likely to be right about my current cause area, and I thought that this had a 50% chance of changing my cause area from my current cause area to a better one if I were wrong about cause prioritization right now, then it would be worth it for me to do that. […]
If EAs wanted to help others, they would all maintain a written list of all the strongest arguments against their cause areas from other EAs, and they’d all have their list of rebuttals. Ideally, I’d be able to write a really good document on cause prioritization and sell it for $100, because it would save other EAs so much time figuring this out themselves.
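Buck's back-of-the-envelope argument can be made concrete. The sketch below uses his stated figures (80% confident, 50% chance of correction, one month of research) plus one assumption of my own: that the better cause area is roughly an order of magnitude better, as his first paragraph suggests.

```python
# Rough expected-value sketch of Buck's argument. The "order of magnitude"
# gain is my own reading of his claim, not a figure he states precisely.

# Cost: one month of twelve spent researching instead of earning to give,
# measured as a fraction of a year's donations forgone.
research_cost_fraction = 1 / 12

p_wrong = 0.20           # he is "80% likely to be right" about his cause area
p_research_fixes = 0.50  # 50% chance the research corrects the mistake
relative_gain = 9        # switching to a ~10x better cause adds ~9x value

# Expected benefit, in units of "years of donations at current effectiveness".
expected_gain = p_wrong * p_research_fixes * relative_gain

print(expected_gain > research_cost_fraction)  # the research pays for itself
```

Even with these rough numbers, the expected gain (~0.9 years of donations) dwarfs the cost (~0.08 years), which is why the conclusion is robust to substantial changes in the assumptions.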
What I Value
I value having enjoyable experiences and avoiding unpleasant experiences. If I value these experiences for myself, then it’s reasonable for me to value them in general. That’s the two-sentence version of why I’m a hedonistic utilitarian.
I hold a few more specific beliefs that I believe follow from hedonistic utilitarianism but that many people disagree with, so they are worth stating explicitly:
- Pleasurable and painful experiences in non-humans have moral value. Non-humans include non-human animals, computer simulations of sentient beings, artificial biological beings, and anything else that can experience pleasure and suffering.
- Future beings have equal moral status to existing beings. We might discount them by their probability of existence, but we should not discount them solely because they exist in the future.
- The best possible outcome would be to fill the universe with beings that experience as much joy as possible for their entire lives. I refer to this outcome as a hedonium shockwave (or utilitronium shockwave).
- I have heard only one reasonably compelling argument that this is not the best possible outcome. It may be the case that giving two beings the exact same experience has no more moral value than one being having the experience. I do not see why this would be true, but I am sufficiently confused about this that I do not put much credence in my intuitions. If this claim is true, then we should fill the universe with beings that are optimized for both happiness and diversity of experiences rather than just for happiness. I’m confused about this but hopefully it’s possible to make a good decision about cause prioritization without solving the hard problem of consciousness.
I am not perfectly confident that hedonistic utilitarianism is true–I have some normative uncertainty. At the same time, I do not know what it would mean for hedonistic utilitarianism to be false (I don’t see how suffering could not be inherently bad, and I don’t see how anything other than suffering could be inherently bad). I am open to arguments that it is false, but I am unpersuaded by arguments of the form “utilitarianism produces an unintuitive result in this contrived thought experiment,” and almost all arguments take this form.
This document is not optimized to be easy to read for people who aren’t familiar with popular effective altruist causes and organizations and has a lot of jargon and abbreviations. That said, I want people to be able to understand what I’m talking about, so I’m happy to offer clarification on specific terms or concepts in the comments section.
Although I try to be as cause-neutral as possible, I feel some emotions that push me in the direction of one cause or another. Throughout this document I try to acknowledge any such feelings. This opens me to criticism along the lines of, “Your arguments for this cause are strong but you are emotionally biased against it; you should consider it more carefully.”
My Writing Process
I wrote this document over time as I researched different causes and organizations. I generally speak about choosing a charity in the future tense, because when I wrote most of this, I had not yet chosen where to donate. While reading this, imagine you are exploring the ideas with me, moving through all the major considerations and reaching a decision near the end of the document.
I found the process of writing this extremely valuable. I quickly identified which fundamental questions I needed to answer, and I wrote separate essays to answer a couple of important fundamental questions. Writing this document clarified for me what issues I need to think about when choosing a cause, and I learned a lot about what different organizations are doing and the arguments for and against them. Writing down your mental models is helpful for clarifying them and examining them from a distance.
I spent about 100 hours producing this document and ultimately changed my mind about where to donate. Even making conservative assumptions about how much I will donate and how much better my choice is now than it would have been, this time spent was worth over $100/hour (and I suspect it’s probably worth more like $500/hour). That said, I found this time enjoyable and probably wouldn’t have put in nearly as much work if it hadn’t been fun. For anyone else who finds this sort of work fun, I strongly encourage you to do it and publish your results in detail.
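For concreteness, here is a conservative version of that value-of-time estimate. Every figure except the 100 hours is a hypothetical assumption of mine, chosen to illustrate how little it takes for the research to clear $100/hour:

```python
# Conservative value-of-time estimate for the cause prioritization research.
# All figures except hours_spent are illustrative assumptions.
hours_spent = 100
annual_donations = 20_000  # hypothetical yearly donation amount
years_affected = 5         # hypothetical: how long the improved choice persists
improvement = 0.10         # hypothetical: new choice is 10% more effective

value_created = annual_donations * years_affected * improvement
print(round(value_created / hours_spent))  # → 100 dollars per hour
```

Less conservative but still plausible assumptions (larger donations, a longer horizon, or a bigger effectiveness gain) push the figure toward the $500/hour end.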
I had a few ideas that emerged as a result of working on cause prioritization, and I wrote them as separate essays:
In Is Preventing Human Extinction Good?, I examine the likely effects of the long-term survival of humanity and consider whether they are good or bad. I conclude that preventing extinction is good in expectation, and more likely good than bad.
In On Values Spreading, I discuss the value of values spreading and come to the (weak) conclusion that preventing global catastrophic risks looks more important.
In Charities I Would Like to See, I propose a few ideas for potentially high-impact interventions.
Things I Still Don’t Understand
I still have a few areas of uncertainty. I took a position on these questions, but I have weak confidence about my position. I’d like to see more work in these areas and will continue to think about them.
- Could high-leverage values spreading be more important than global catastrophic risk reduction?
- Can we actually have success with far-future interventions when we don’t have good feedback loops?
- How much evidence should I require before supporting a speculative cause?
- When is supporting a meta-charity not worth it?
I have many people to thank for helping me produce this document.
Thanks to Nick Beckstead, Daniel Dewey, Ruairi Donnelly, Eric Herboso, Victoria Krakovna, Howie Lempel, Toby Ord, Tobias Pulver, Joey Savoie, Carl Shulman, Nate Soares, Pablo Stafforini, Brian Tomasik, Emily Cutts Worthington, and Eliezer Yudkowsky for answering my questions about their work and discussing ideas with me.
Thanks to Jacy Anthis, Linda Dickens, Jeff Jordan, Kelsey Piper, Buck Shlegeris, and Claire Zabel for reviewing drafts and helping me develop my thoughts on cause selection.
If I inadvertently left out anyone else, then I apologize, and thanks to you, too.
Global Poverty
Among global poverty charities, the Against Malaria Foundation (AMF) probably has the strongest case that it saves lives effectively. There’s strong evidence that it helps humans in the short run, but I have some concerns about its larger effects. Does AMF (or other global poverty charities) negatively impact wild animals? Does making humans better off hurt the far future? My best guess on both these questions is “no,” but I have a lot of uncertainty about them, so the case for AMF is not as clear-cut as it first appears. That said, if every potentially high-impact but more speculative cause that I consider has insufficient evidence that it’s effective, I may donate to AMF. I consider this something of a fallback position: AMF is the strongest charity unless another charity can show that it’s better.
I do not discuss global poverty charities in depth here because I do not believe I have much to add to GiveWell’s extensive analysis.
Factory Farming
Some charities such as The Humane League work to prevent animals from suffering on factory farms. There’s a plausible case that some such charities do much more good than GiveWell top charities (perhaps by an order of magnitude or more), although the supporting evidence here is much weaker.
As with global poverty, reducing factory farming may be net harmful in the short run. Reducing factory farming might reduce speciesism and spread good values in the long term, but this claim is highly speculative. So it's not a question of comparing speculative far-future causes against proven factory farming interventions: the case for the long-term benefits of reducing factory farming is on no firmer ground than the case for global catastrophic risk charities. I discuss values spreading as a separate cause below.
Charities against factory farming do not serve as a fallback position in the same way that GiveWell top charities do, because the evidence in their favor is a lot weaker. The state of this evidence is improving, and funding studies on animal advocacy could be highly effective; see my discussion of Animal Charity Evaluators.
Far Future (General)
Almost all utility lives in the far future. Thus, it’s likely that the most effective interventions are ones that positively affect the far future. But this line of reasoning has a major problem: it’s not at all obvious how to positively affect the far future. Some, such as Jeff Kaufman, believe this is sufficient reason to focus on short-term interventions instead.
Short-term interventions such as direct cash transfers will always have stronger evidence in their favor than far future interventions. But the far future is so overwhelmingly important that I believe our best bet is to support far future causes whenever we can find charities with reasonably good indicators of their effectiveness (e.g. success at achieving short-term goals or competent leadership). It’s conceivable that we won’t be able to find any sufficiently reliable charities (and this was my impression when I first investigated the issue a few years ago), but it’s worth trying.
I used to prefer short-term interventions with clear supporting evidence–I supported GiveWell top charities and, later, ACE top charities and ACE itself. But after a few conversations with Carl Shulman, Pablo Stafforini, Buck Shlegeris and others, I started more seriously considering the fact that almost all value lives in the far future, and the best interventions are probably those that focus on it. This is not to say that we should donate to whatever charity can give a naive argument that it has the highest expected value. When I discuss specific far-future charities below, I look for indicators on whether their activities are effective. I would not give to a far-future charity unless it had compelling evidence that its activities would be impactful. Obviously this evidence will not be as strong as the evidence in favor of GiveWell top charities, but there still exist far-future charities with better and worse evidence of impact.
We should be cautious about being lenient with a cause area’s strength of evidence. Jeff Kaufman explains:
People succeed when they have good feedback loops. Otherwise they tend to go in random directions. This is a problem for charity in general, because we’re buying things for others instead of for ourselves. If I buy something and it’s no good I can complain to the shop, buy from a different shop, or give them a bad review. If I buy you something and it’s no good, your options are much more limited. Perhaps it failed to arrive but you never even knew you were supposed to get it? Or it arrived and was much smaller than I intended, but how do you know. Even if you do know that what you got is wrong, chances are you’re not really in a position to have your concerns taken seriously.
[With AI risk, the] problem is we really really don’t know how to make good feedback loops here. We can theorize that an AI needs certain properties not to just kill us all, and that in order to have those properties it would be useful to have certain theorems proved, and go work on those theorems. And maybe we have some success at this, and the mathematical community thinks highly of us instead of dismissing our work. But if our reasoning about what math would be useful is off there’s no way for us to find out. Everything will still seem like it’s going well.
AI risk and other speculative causes don't have good feedback loops, but that does not mean we know nothing about whether we're succeeding. And there's reason to believe we should support speculative causes anyway. As Nick Beckstead writes:
My overall impression is that the average impact of people doing the most promising unproven activities contributed to a large share of the innovations and scientific breakthroughs that have made the world so much better than it was hundreds of years ago, despite the fact that they were a small share of all human activity.
The best interventions are probably those that significantly affect the far future, although probably many (or even most) far-future interventions do nothing useful. We should try to improve the far future, but be careful about naive claims of high cost-effectiveness and look for indicators that far-future charities are competent.
Values Spreading to Improve the Far Future
Some people propose focusing on spreading values now to increase the probability that the far future has beneficial results. I have never seen any strong reason to believe that anything we do now will affect far future values–if the case for organizations reducing global catastrophic risks is tenuous, then the case for values spreading is no better.
In Vegan Advocacy and Pessimism about Wild Animal Welfare, Carl Shulman points out that vegan advocacy could be bad for animals in the short run (although Brian Tomasik believes it has positive short-run effects), so the main benefit comes from values spreading; but the benefits of values spreading are unclear.
I discuss this subject in more depth in On Values Spreading. I conclude that there are good arguments that values spreading is the most effective activity, but there are also serious considerations against it, and global catastrophic risk reduction looks more important.
Global Catastrophic Risk Reduction
I include existential risks as a type of global catastrophic risk (GCR). Nick Bostrom has argued that existential risks are substantially worse than other GCRs. Nick Beckstead disagrees, and I find Beckstead’s case persuasive.
It appears to be the case that either (1) working on GCR reduction in general is the best thing to do, in which case there may be multiple different cause areas within GCR that are more effective than any non-GCR causes; or (2) working on GCR is not the best thing to do, in which case all cause areas within GCR are similarly ineffective. (In case (1), lots of GCR interventions may still be ineffective, for example, research on preventing alien hamster invasions.)
Is preventing human extinction good?
This section was getting so in-depth that I moved it into a separate article. In summary: there are a few reasons why the impact of humanity on the far future could be negative; but overall, it looks like humanity's impact has a positive expected value (and will probably be positive), so it's highly valuable to ensure that human civilization continues to exist.
Size of Impact
Preventing global catastrophic risk is a sufficiently important problem that fairly small efforts in the right direction can have much larger long-term effects than GiveWell-recommended charities (see Beckstead’s dissertation “On the Overwhelming Importance of Shaping the Far Future”). Successful GCR interventions probably have a bigger positive impact than anything else except possibly ensuring that the beings controlling the far future have good values (although I believe GCR reduction is probably more important, for reasons discussed above).
Strength of Evidence
Previously, I had major concerns about whether any GCR interventions had any effect. However, with Open Phil’s recent research into GCRs, I am more confident that there will emerge opportunities with sufficiently strong evidence of effectiveness that they make good giving targets. Open Phil has high standards of rigor, and I trust it to recommend interventions that have strong arguments in their favor.
Due to the haste consideration, I want to seriously consider donating to GCR interventions this year or next year. The evidence for the effectiveness of GCR organizations is uniformly much weaker than the evidence for GiveWell top charities, but this does not rule them out as contenders for the best cause area. Their overwhelming importance means I am willing to be more lenient about their strength of evidence than I would be for proximate interventions.
GCR reduction as a cause is highly neglected right now, but more large donors are showing an interest in the topic, so it’s plausible that it will receive more funding in the future. Even so, funding it now may provide more information for future donors and help the field grow more quickly. Additionally, there’s the haste consideration: we don’t know when a major global catastrophe will occur or how long it will take to prepare for, so we should begin preparing as early as possible. Something like unfriendly AI is probably at least two decades away, but it will probably take more than two decades to develop solid theory around friendly AI.
It’s not obvious what counts as evidence that a GCR intervention is working. I discuss this specifically with regard to the individual organizations that I consider.
AI Safety
Preventing unfriendly AI might successfully avert human extinction, which would have an extremely large impact. Furthermore, building a friendly AI is plausibly more important than any other GCR if it enables us to produce astronomically good outcomes that we would not be able to produce otherwise.
Friendly AI and Non-Human Animals
Given that non-human animals may dominate the expected value of the far future, it’s important that an AI gives them appropriate consideration. I discuss this issue in a few places in this document. Here I have a couple of additional quick points:
- An AI-controlled future could be much better or much worse than a human-controlled future because a superintelligent AI would have more power to shape the universe than unaided humans.
- Research on cooperation between goal-directed agents is possibly more valuable than research on encoding human values, because it's probably less likely to lead to a far future that's very bad for animals.
GCRs Other than AI Safety
I do not know of any good ways to fund organizations putting work into individual GCRs other than AI safety. I agree with Open Phil’s assessment that biosecurity and geoengineering are highly promising (although I have only briefly investigated these areas, so much of my confidence comes from Open Phil’s position and not from my own research). I do not see reason to believe that some GCR other than AI risk is substantially more important; no GCR looks much more likely than AI risk, and right now it looks much easier to efficiently support efforts to improve AI safety than to support work on other major GCRs. I expect that there’s perhaps some geoengineering research worth funding, but I don’t have the expertise to identify it and I don’t know how to find geoengineering research that’s funding-constrained.
FLI has published a list of organizations working on biotechnology, although if I tried to read through this list and find ones worth supporting, I would do a poor job; if someone with some domain knowledge looked through these to find ones potentially worth donating to, that could be highly valuable. I believe I understand AI safety well enough to roughly assess organizations in the field, but this is not the case with any other GCR.
If I did come to the conclusion that some specific GCR other than AI safety was the most important, I should probably try to use Open Phil’s research to learn more. I discuss this potential giving opportunity in “Open Phil-Recommended GCR Charities” below. I would also encourage other EAs, especially those with some knowledge about some relevant field such as biosecurity, to explore the available options and publicly write about any good giving opportunities they find.
If a sufficiently strong giving opportunity arises in a field of GCR reduction other than AI safety, I will seriously consider it; but at present I don’t see any.
Movement Building
Effective charities that work to reduce GCRs may be many times better than global poverty charities, so organizations that create new EAs may be valuable largely insofar as they create new donations to GCR charities. If I thought global poverty were the best cause, then meta-organizations that attempt to grow the donation base might be even better. But if GCR reduction is vastly more important, then movement-building charities produce most of their value through the small set of new donors who support GCR reduction.
It’s possible that there exist movement-building organizations that produce a sufficiently large benefit to outweigh donations to effective GCR charities. I discuss this possibility when looking at individual movement-building organizations below. But in the general case, I expect that donating directly to the best object-level charity will have a higher impact than donating to movement building organizations.
There are a few additional concerns with supporting movement building; Peter Hurford discusses the most important ones here.
Meta-research (which mostly means charity evaluation, although it could also include things like Charity Science's research on fundraising strategies) potentially has a lot of value if it discovers new interventions with a bigger impact or more room for funding than the interventions currently popular among EAs. It's hard to predict when this will actually happen, and it depends on the extent to which you believe EAs have already identified the best interventions. But I'm generally optimistic about efforts to produce new knowledge.
Here I briefly discuss the major considerations for and against every organization I have seriously considered. The organizations are grouped roughly by category and otherwise listed in no particular order.
I do not discuss a number of potentially promising organizations because surface signs show that they are unlikely to be the most effective charity, and I couldn’t find good enough information about them to feel confident about donating to them.
Machine Intelligence Research Institute (MIRI)
Emotional disclosure: I feel vaguely uncomfortable about MIRI. Originally I was bothered by Eliezer’s lack of concern for animals and worried that he would make decisions to benefit humans at the expense of other conscious minds; MIRI’s new director Nate Soares does seem to give appropriate value to non-human animals, so this is less of a concern. I also was bothered by how hard it was to tell if it was doing anything good. Today, it is more transparent and produces more tangible results. This second concern may still be significant, but it is over-weighted in my emotional response to MIRI. I still have an intuitive reaction that AI research isn’t as good as actually helping people or animals, but I try to ignore this because I don’t believe it’s rational.
The evidence for MIRI’s effectiveness is considerably weaker than for GiveWell top charities. I have some concerns, but I see a number of reasons to expect that MIRI is succeeding at achieving its short-term goals, which gives me confidence in its organizational competence. It doesn’t have some of the same problems I see with FLI (which I discuss in the separate section on FLI), which makes me prefer MIRI over FLI.
Strength of Evidence
MIRI is trying to improve outcomes in the future, so it’s not clear what qualifies as evidence that MIRI is currently doing a good job. We can’t get direct evidence without predicting the future, so here are a few things I look for:
- Its researchers and leadership appear competent and devoted to the problem.
- It has high research output and its research is well-regarded by others in the field.
- It successfully convinces other AI researchers that alignment is an important problem.
- Respected AI researchers endorse MIRI as effective.
- It is transparent and makes an effort to publicly disclose its activities, accomplishments, and failures.
- Its researchers care about non-human animals.
Based on personal conversations with MIRI researchers and reading their public writings, I get the impression that they have a strong grasp on which sub-problems in AI safety are important and how to make progress on them. I have only had fairly limited personal interactions with MIRI researchers; the most extensive interaction I had was when I attended a MIRIx workshop where we discussed their paper “Robust Cooperation in the Prisoner’s Dilemma”. The problem this paper attempts to solve has clear relevance to AI safety–we would like superintelligent agents to cooperate with us and with each other on real-world prisoner’s dilemmas–and the paper makes obvious steps toward solving this problem while also outlining what remains unsolved.
More broadly, the items listed on MIRI’s technical agenda look like important and urgent problems. At the very least, MIRI appears competent at identifying significant research problems that it needs to solve. My impression is that MIRI is doing a better job than anyone else at identifying the important problems, although this is difficult to justify explicitly.
We have to consider how competent MIRI is compared to other researchers we could fund: perhaps if other people were working on the sorts of problems MIRI works on, they would solve them much more quickly and efficiently. I find this somewhat unlikely. I have read a lot of writing by Eliezer Yudkowsky, Luke Muehlhauser, and Nate Soares (Eliezer is the founder and a senior researcher, Luke is the former director, and Nate is the current director), and they strike me as intelligent people with strong analytic skills and a good grasp of the AI alignment problem. I briefly looked through the FLI grantees, and MIRI's research plan seems more obviously important for AI safety than the plans of many of the grantees.
Although MIRI published little before 2014, it has been publishing more papers since then. I haven't engaged much with its research papers, but a cursory examination suggests that they are probably relevant and valuable. It looks like most of MIRI's papers are purely self-published, but a few have been accepted at respected conferences (including AAAI-15), although I don't know how high a bar this is. This is another of MIRI's weak points–there's no clear evidence that other AI researchers respect its publications. MIRI papers are rarely cited by anyone other than MIRI itself, and I would feel more confident about MIRI if it received more citations. This is not a strong negative signal, because AI safety is such a small field, but it's certainly not a positive signal either.
Nate has discussed the fact that AI researchers appear more concerned about safety than they used to be, although it is unclear whether MIRI has had any causal role in bringing this about. Alyssa Vance lists a few prominent academics who are familiar with or involved in MIRI's work. I would like to see this trend continue–AI safety remains a small field, and few AI researchers work on safety full-time.
I don’t know much about this, but my understanding is that in the past year or so, FLI has done a good deal more than MIRI to generate academic interest in AI safety; but MIRI had done more in previous years, and FLI probably wouldn’t exist (or at least wouldn’t be concerned about AI) if it weren’t for MIRI. This suggests that MIRI has done a reasonably good job in the past of raising concern for AI safety, which is a good sign for MIRI’s competence. It certainly could have been much more successful–MIRI has existed for over a decade and AI safety has only recently begun gaining momentum. The idea of AI safety sounds prima facie absurd, so I’d expect it to be hard to convince people that it matters, but perhaps someone other than MIRI could still have done a better job raising concern. (Today FLI seems to be doing a better job, although this may largely come from the fact that MIRI is focusing less on advocacy and more on research.)
Stuart Russell has publicly endorsed the importance of AI safety work and serves as a research advisor to MIRI. The advisory board consists of professors and AI researchers. I don’t know what sort of relationship the advisors have with MIRI or to what extent serving as an advisor acts as an implicit endorsement of MIRI’s competence.
From what I have seen, MIRI is fairly lacking in endorsements from respected AI researchers. I do not know how likely it would be to get endorsements if it were doing valuable work, so I don’t know how concerning this is, but it certainly counts as evidence against MIRI’s effectiveness.
Nate has claimed that when he discusses the problems MIRI is working on with AI researchers, they agree that the problems are important:
I talk to industry folks fairly regularly about what they’re working on and about what we’re working on. Over and over, the reaction I get to our work is something along the lines of “Ah, yes, those are very important questions. We aren’t working on those, but it does seem like we’re missing some useful tools there. Let us know if you find some answers.”
Or, just as often, the response we get is some version of “Well, yes, that tool would be awesome, but getting it sounds impossible,” or “Wait, why do you think we actually need that tool?” Regardless, the conversations I’ve had tend to end the same way for all three groups: “That would be a useful tool if you can develop it; we aren’t working on that; let us know if you find some answers.”
Given that Nate is obviously motivated to believe that AI researchers value the work he’s doing, he could be cherry-picking or misinterpreting people’s claims here (I doubt he would do this deliberately but he may do it subconsciously or accidentally). It’s also possible that people exaggerate how important they believe his research is for the sake of politeness. He does not provide any specific quotes or name any researchers who endorse MIRI’s work as important, so I do not consider his claims here to be strong evidence.
MIRI makes some effort to make itself more transparent:
- It publishes monthly newsletters that describe its research and activities
- It writes annual reviews
- It occasionally writes up explanations of its papers on the MIRI blog
As far as I know, it was not doing any of these things three years ago, so this shows promise.
Even better, it has a detailed guide to what technical problems MIRI is researching and a technical agenda explaining why it works on the problems it does. These materials were published relatively recently, so MIRI is increasing transparency.
Concern for Animals
Strictly speaking, this doesn’t have anything to do with MIRI’s skill at AI safety work, but one of my major concerns with friendly AI research is that it could lead to the development of an AI that benefits humans at the expense of non-human animals. In a separate essay, I come to the conclusion that GCR reduction is probably valuable even considering its impact on non-human animals. Even so, I feel better about people doing AI safety research if they care about animals and are therefore motivated to do research that will not end up harming animals.
I am somewhat more optimistic because Nate Soares, the current director of MIRI, appears to place high value on non-human animals; I have spoken with him about this issue, and he agrees it would be bad if an AI did not respect the interests of non-human animals and that it’s a genuine concern. I briefly investigated the positions of most of the other full-time MIRI employees. From what I can glean from public information, it looks like Rob Bensinger places adequate value on non-human animals and has a good understanding of why it’s silly to not be vegetarian. Patrick LaVictoire apparently cares about animals, and Katja Grace talks as though she cares about animals but I find her arguments against vegetarianism concerning2 (Rob Bensinger has counterarguments). Eliezer Yudkowsky doesn’t believe animals are morally relevant at all. I don’t know about the rest of MIRI’s staff.
Room for More Funding
Based on MIRI’s fundraising goals and current funds raised, I expect that it has substantial room for more funding. It has laid out a fairly coherent plan for how it could use additional funds, and Nate believes it could effectively use up to $6 million. Although I am less confident about its ability to usefully deploy an additional $6 million than an additional, say, $1 million, it is unlikely to raise that much in the near future; I expect it to continue to have a substantial funding gap.
AI safety is attracting considerably more attention: Elon Musk has donated $10 million, and other donors or grantmakers may put more money into the field. This is still fairly uncertain, and I don’t want to count on it happening; plus, I expect MIRI to have a better idea of which problems matter than most AI researchers or grantmakers (MIRI researchers have been working full-time on AI safety for a while), so funding MIRI probably matters more than funding AI safety research in general.
I’m concerned that FLI did not make larger grants to MIRI; this reflects negatively on MIRI’s potential room for funding. I suspect FLI is being too conservative about making grants, but they have more information than I do, so it’s hard to say. This is one of my primary concerns with MIRI. I’ve tried to find out more information about FLI’s decision here, but their grantmaking process involved confidential information, so there’s a limit to what I can learn.
Future of Humanity Institute (FHI) and Centre for the Study of Existential Risk (CSER)
Both of these organizations are potentially high value, but representatives of both organizations have claimed that they are not currently funding constrained.
Niel Bowerman from FHI:
I would argue that FHI is not currently funding constrained….We could of course still use money productively to hire a communications/events person, more researchers and to extend our runway, however at present I would suggest that funding x-risk-oriented movement building, for example through Kerry Vaughan and Daniel Dewey’s new projects, is a better use of funds than donating to FHI for EA-aligned funding. source
Seán Ó hÉigeartaigh from CSER:
We’re not funding constrained in the large at the moment, having had success in several grants. We have good funding for postdoc positions and workshops for our initial projects. Most of our funding has some funder constraints, and so we may need small scale funding over the coming months for ‘centre’ costs that fall between the cracks, depending on what our current funders agree for their funds to cover – one example is an academic project manager position to aid my work. source
Both of these people posted comments on a Facebook thread after Eliezer said these organizations were funding-constrained. Apparently a good way to find information about an organization is to make public, incorrect claims about it.
Edited 2015-09-21 to add: The fact that these organizations claim they don’t have room for more funding makes me more confident that they’re optimizing for actually reducing existential risk rather than optimizing for personal success. If one of them does become substantially funding-constrained in the near future, I consider it fairly likely that it will be the best giving opportunity.
Future of Life Institute
FLI organized the “Future of AI” conference on AI safety and funded AI research projects that cover a somewhat broader range than MIRI’s research does. It plans to expand into biosecurity work, but at the time of this writing that work has not gotten beyond the early stages.
Size of Impact
I expect the median FLI grant to be less effective than the same amount of money given to MIRI, but due to its breadth it may hit upon a small number of extremely effective grants that end up making a large difference. That said, the broader approach of FLI looks more reasonable to fund for someone who doesn’t have strong confidence that MIRI is effective at reducing AI risk.
Some of FLI’s AI grants are probably highly effective. However, I find some of them concerning. Some of the research projects attempt to make progress on inferring human values. If the inferred human values are harmful (more specifically, they do not assign sufficient value to non-human animals or other sorts of non-human minds), the AI could produce very bad outcomes such as filling the universe with wild-animal suffering. I think this is more likely not to happen than to happen, but it’s a substantial concern, and it’s an argument in favor of spreading good values to ensure that if AI researchers create a superintelligent AI, they give it good values.
I do not have the same concern with MIRI: I have spoken to Nate Soares about this issue, and he agrees that encoding human values (as they currently exist) in an AI would be a bad idea, in part because it might give insufficient weight to non-human animals.
Room for More Funding
After working closely with FLI during the receipt and evaluation of proposals, we determined that the value of high quality project proposals submitted was greater than the available funding. Consequently, we made a grant of $1,186,000 to FLI to enable additional project proposals to be funded.
It sounds like Open Phil gave FLI exactly as much money as it believed FLI needed to fund the most promising research proposals. This makes me believe that FLI has no room for more funding. Even if FLI had wanted to fund more grants, I don’t believe I could actually enable it to do so.
Suppose FLI has $X and would like to have $(X+A+B+C). Open Phil believes FLI should have $(X+A+B). If I do nothing, Open Phil will give $(A+B) to FLI. If I give $A to FLI, Open Phil will give $B, so either way FLI ends up with $(X+A+B). I cannot give money to FLI after Open Phil does because FLI will have finished making grants by then. I believe this model approximately describes the situation during the previous round of grantmaking and probably describes future rounds; so my donations only serve to reduce the amount of money that Open Phil gives to FLI.
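The displacement logic above can be sketched as a toy calculation. The function and the dollar figures are purely illustrative; they just encode the assumption that Open Phil tops FLI up to its own target:

```python
def fli_total(my_donation, fli_current, open_phil_target):
    """Open Phil tops FLI up to its target, so a small donation
    merely reduces Open Phil's grant dollar-for-dollar."""
    open_phil_grant = max(open_phil_target - fli_current - my_donation, 0)
    return fli_current + my_donation + open_phil_grant
```

Under this model, whether or not I give my $A, FLI still ends up with exactly $(X+A+B); my donation changes only how much Open Phil gives.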
Open Phil-Recommended GCR Charities
At the time of this writing, Open Phil has not produced any recommendations on GCR interventions that small donors can viably support, and probably won’t for a while. In fact, it’s not even clear that it has any plans to do so. I looked through Open Phil’s published materials and could not find anything on this.
(Edited 2015-09-16 to clarify.)
But if Open Phil does produce recommendations for small donors, it’s likely that one or some of these recommendations will represent better giving opportunities than any existing GCR charity that I could identify on my own.
Size of Impact
GCRs have possibly the largest impact of any cause area; the case for this has been made before and does not need to be repeated here. Presumably, Open Phil recommendations will have as large an impact as any other organizations working in the GCR space, although it’s fairly likely that Open Phil will not find any organizations that have a higher impact than organizations like MIRI or FHI that are well known in EA circles.
Waiting for Open Phil means losing out on any value that could be generated between now and then, including both direct effects and learning value. The haste consideration weighs heavily in favor of supporting organizations now rather than waiting for Open Phil.
Strength of Evidence
I expect strength of evidence to be the main benefit of Open Phil-recommended organizations over current organizations. Although Open Phil focuses on more speculative causes than GiveWell classic, it still does extensive research into cause areas, and I would expect it to recommend specific interventions if it has strong reason to believe that they are effective. Right now, the organizations working on GCR reduction have only weak evidence of impact, and Open Phil will likely change this.
Room for More Funding
Although Open Phil-recommended GCR organizations may be the best giving opportunities in the world, I have major concerns about their neglectedness. Right now Good Ventures has more money than it knows how to move, and it could fill the room for more funding for all of Open Phil’s recommendations on GCR reduction. If I donate to GCR, it may only displace donations by Good Ventures. I see this as a major argument against waiting for Open Phil recommendations. It’s possible that Open Phil will find massively scalable opportunities in this space, but it does not seem likely that it will find anything so scalable that it can absorb any funds Good Ventures directs at it and still have room for more funding.
GiveWell/Open Philanthropy Project
(Here I use GiveWell to refer to both classic GiveWell and Open Phil.)
Size of Impact
(Edited 2015-09-16 to expand on my reasoning.)
I consider it likely that Open Phil’s work on GCRs will find interventions that are more effective than anything EAs are currently doing. But it seems rather unlikely that their other current focus areas (except possibly factory farming) will produce anything as effective. Over the next 5-10 years the existing institutions working on GCR reduction may run out of room for more funding as the EA movement grows and/or GCR reduction efforts attract more interest, in which case Open Phil-type work of seeking out new interventions would be especially valuable; but I don’t think we’re there yet. It’s also unclear to what extent GiveWell can use additional funds from small donors to produce recommendations more quickly.
If I believe that GCR interventions are much more effective in expectation than most other sorts of interventions (which I do), then Open Phil’s effectiveness gets diluted whenever it works on anything other than GCR reduction. I understand that Open Phil/Good Ventures want to fund a broader range of interventions, and that may make sense for someone with as much money as Good Ventures; but if I believe they are leaving funding gaps in GCR interventions then I can probably have a bigger impact by funding those interventions directly rather than by supporting Open Phil.
Strength of Evidence
GiveWell appears to apply much more rigor and clear thinking to charity analysis than anyone else. I trust its judgment more than my own in many cases. I am concerned that it does not place sufficient attention on sentient beings other than humans. Open Phil recently committed $5 million to factory farming, which I find promising but ultimately much too limited. Good Ventures recently committed to a $25 million grant to GiveDirectly; it’s plausible that the factory farming grant will do perhaps two or three orders of magnitude more good per dollar than the GiveDirectly grant and should be receiving a lot more money. If GiveWell as an organization shared my values about the importance of animals, I might be more likely to support it, but their current spending patterns make me reluctant.
Room for More Funding
(Edited 2015-09-16 to clarify.)
Good Ventures currently pays for a large portion of GiveWell’s operating expenses, and GW has no apparent need for funding. It wants to maintain independence from Good Ventures by keeping other sources of funding, but I do not find this consideration very important. If Good Ventures stops backing them and GiveWell finds itself in need of funding, I will reconsider donating.
I don’t know how much influence Good Ventures has over GiveWell’s activities. I’m considerably less confident in Good Ventures’ cause prioritization skills than GiveWell’s, so I prefer that GiveWell has primary control over Open Phil’s cause selection, although I don’t know how much this matters. I wish GiveWell were more transparent about this.
Animal Charity Evaluators (ACE)
Emotional disclosure: I feel a strong positive affect surrounding ACE, so I may end up overrating its value. The thought of giving money to ACE makes me feel like I’m making a big difference on an emotional level, and this feeling probably biases me in ACE’s favor.
Size of Impact
ACE has three plausible routes to impact that I can see.
- It could discover new effective interventions.
- It could produce stronger evidence for known interventions and thus persuade more donors to direct money there.
- It could produce strong evidence that known interventions are ineffective and thus direct money elsewhere.
On #1: I expect there are highly effective methods of helping animals that are not yet tractable or even well-understood, such as environmental interventions to reduce wild-animal suffering. ACE cares about wild-animal suffering and would likely do research on under-examined and potentially high-impact topics such as this if it had a lot more funding. It’s unlikely that my funding would push ACE over the edge to where it decides to invest more in research of this sort; but it cannot do this research unless it has more funding, and it cannot get more funding unless people like me provide it. ACE is also small enough that if I requested that it do more research in some area, it would probably be willing to entertain the possibility.
On #2: I have met several people who do not donate to animal charities solely because they think the evidence for them is too weak. If ACE produced higher-quality research supporting animal charities, this would almost certainly persuade some people to donate to them; but I don’t know how much money would be directed this way.
On #3: If ACE discovers that popular interventions are much less effective than previous evidence showed, to the point where GW top charities look more effective and more donors start giving to GW charities instead, the maximum impact here would be the impact had by marginal donations to GW top charities. This is a much smaller impact than the plausible impact of effective animal charities. Certainly if current interventions are ineffective then we want to know, but discovering this is much less valuable than discovering that some factory farming intervention is definitively 10x more impactful than a GW top charity. Changing impact from 0x to 1x is much less important than changing from 1x to 10x.
Edited 2015-09-16 to add: ACE may find that some types of interventions are ineffective while others are effective and thus direct funding to the more effective interventions. This would be about as valuable as #2, and possibly more effective because this sort of evidence might be able to move more money. Thanks to Carl Shulman for raising this possibility.
It is unclear how to reason about the probability of #1 and #3 or the effect size of #1 and #2, so I do not know much about the expected impact of each. Number 1 certainly has the highest upside and makes me the most optimistic about the value of donations to ACE. I would like to see rigorous work on interventions to help animals on a massive scale (such as wild animals or animals in the far future). Right now, we’re nowhere close to being able to produce this sort of work, but the best way I can see to push us in that direction is to support ACE.
As I explain in “Is Preventing Human Extinction Good?”, I see good reason to be optimistic about the long-term impact of humanity on all sentient life. In the words of Carl Sagan, “If we do not destroy ourselves, we will one day venture to the stars” (and biologically engineer animals to be happy). Thus it looks like ensuring humanity continues to survive is more important than reducing wild-animal suffering in the medium-term future.
It’s possible that spreading concern for wild animals will have a massive effect on the far future; but it’s not at all clear that ACE research will ever have this effect, even if it does research on reducing wild animal suffering. I discuss my general concerns with values spreading in “On Values Spreading”. Even so, I believe ACE has a decent chance of being the most effective charity. It’s not too unlikely that if ACE had substantially more funding, it would find an intervention or interventions that are more effective than anything that currently receives funding. This makes supporting ACE look like a promising option.
Strength of Evidence
ACE does not have as strong a reputation as GiveWell, although it is a much newer and smaller organization so this is to be expected. The interactions I have had with employees and volunteers at ACE have left me with a strong positive impression of their competence and concern for the problems they are attempting to solve. Their research results have not been nearly as in-depth as GiveWell’s, but ACE acknowledges this. This is largely a product of the lack of studies that have been done on animal advocacy. ACE is making some efforts to improve the state of research, and these efforts look promising. I spoke to Eric Herboso3 about this, and he had clearly put some thought into how ACE can improve the state of research.
Room for More Funding
ACE appears to have strong ability to absorb more funding. Right now it has a budget of only about $150,000 a year–not nearly enough to do the sort of large randomized controlled trials that it wants. I expect ACE could expand its budget several-fold without having much diminishing marginal effectiveness. Additionally, if it expanded, it could broaden its scope, putting more effort into researching wild animal suffering or other speculative but potentially high-impact causes.
Learning Value
ACE does research and publicly publishes its results, so I believe donations to ACE have particularly high learning value. Peter Hurford has argued “when you’re in a position of high uncertainty, the best response is to use a strategy of exploration rather than a strategy of exploitation.” I expect donations to ACE to produce more valuable knowledge than donations almost anywhere else, which makes me optimistic about the value of donations to ACE. In particular, I expect ACE to produce substantially more valuable research per dollar spent than GiveWell.
Animal Ethics (AE) and Foundational Research Institute (FRI)
Both these organizations do high-level research and values spreading for fairly unconventional but important values like concern for wild animals. I wouldn’t be surprised if one of these turned out to be the best place to donate, but I don’t know much about their activities or room for more funding and I’ve had difficulty finding information. The only thing I can see them publicly doing is publishing essays. While I find these essays valuable to read, I don’t have a good picture of how much good this actually does.
A note to these organizations: if you were more transparent about how you use donor funds, I would more seriously consider donating.
Giving What We Can (GWWC)
I’m skeptical about the value of creating new EAs because the 2014 EA survey showed that the average donation size was rather small. However, Giving What We Can members are probably substantially better than generic self-identified EAs because GWWC carefully tracks members’ donations. I can’t find any more recent data, but from 2013 it looks like members have a fairly strong track record of keeping the pledge.
At present, only a tiny fraction of GWWC members’ donations go toward GCR-reduction or animal-focused organizations, which may be much higher value than global poverty charities. Based on GWWC’s public data, it has directed $92,000 to far-future charities so far (and apparently $0 to animal charities, which I find surprising). If we extrapolate from GWWC’s (speculative) expected future donations, current members will direct about $287,000 to far-future charities. That’s less than GWWC’s total costs of $443,000, but the additional donations to global poverty charities may make up for this. But I’m skeptical that GWWC will have as large a future impact as it expects (a 60:1 fundraising ratio seems implausibly high), and it’s not clear how many of its donations would have happened anyway. I know a number of people who signed the GWWC pledge but would have donated just as much if they hadn’t. (I don’t know how common this is in general.) Additionally, I don’t have a clear picture of how donations to GWWC translate into new members. GWWC might raise more money than Charity Science or Raising for Effective Giving (both discussed below), but I have a lot more uncertainty about its impact, which makes me more hesitant to support it.
These various factors make me inclined to believe that directly supporting GCR reduction or high-learning-value organizations will have greater impact than supporting GWWC.
Charity Science
Charity Science has successfully raised money for GiveWell top charities (it claims to have raised $9 for every $1 spent) through a variety of fundraising strategies. It has helped individuals run Christmas fundraisers and created a Shop for Charity browser extension that allows you to donate 5% of your Amazon purchases at no cost to you. It plans to explore other methods of fundraising, such as applying the REG model to other niches and convincing people to put charities in their wills.
Size of Impact
Right now Charity Science focuses on raising money for GiveWell top charities. Its fundraising model looks promising–it tries a lot of different fundraising methods, so I think it’s likely to find effective ones–but I expect that the best charities are substantially higher-impact than GiveWell top charities, so this leads me to believe that donations to Charity Science are not as impactful as donations to highly effective far-future-oriented charities. I spoke with Joey Savoie, and he has considered doing research on effective interventions to help non-human animals. This is promising, and I may donate to Charity Science in the future if it ever focuses on this, but for now its activities look less valuable than ACE or REG (see below).
Edited 2015-09-16 to add: Carl Shulman points out that Charity Science’s 9:1 fundraising ratio substantially undervalues the opportunity cost of staff time, so the effective fundraising ratio is less than this. This looks like a bigger problem for Charity Science than for the other fundraising charities I consider.
Room for More Funding
Based on Charity Science’s August 2015 monthly report, it looks like it could use new funding to scale up and broaden its activities. It has enough ideas about activities to pursue that I believe it could deploy substantially more funds without experiencing much diminishing marginal utility.
Learning Value
Donations to Charity Science will likely have high value in terms of learning how to effectively raise funds. I’m uncertain about how valuable this is; I feel more confident about the value of learning about object-level interventions, and I’m somewhat wary of movement growth as a cause, largely for reasons Peter Hurford discusses here.
Raising for Effective Giving (REG)
Size of Impact
In 2014, REG had a fundraising ratio of 10:1, about the same as Charity Science’s. I am somewhat more optimistic about the value of REG’s fundraising than Charity Science’s because REG has successfully raised money for far future and animal charities in addition to GiveWell recommendations. For details, see REG’s quarterly transparency reports. In the conclusion, I look at REG’s fundraising in more detail (including how much it raises for far future and animal charities) to try to assess how much value it has.
Strength of Evidence
The case for REG’s effectiveness appears pretty straightforward: it has successfully persuaded lots of poker players to donate money to good causes. Along with other movement-building charities, REG faces a concern about counterfactuals: how many of REG-attributed donations would have happened anyway? I believe this is a serious concern for Giving What We Can–many people who signed the pledge would have donated the same amount anyway (I’m in this category, as are many of my friends).
REG’s case here looks much better than the other EA movement-building charities I’ve considered. REG focuses its outreach on poker players who were previously uninvolved in EA for the most part. Even if they were going to donate substantial sums prior to joining REG, they almost certainly would have given to much less effective charities.
Room for More Funding
REG is small and has considerable room to expand. It has specific ideas about things it would like to do but can’t because it doesn’t have enough money. I expect REG could effectively make use of an additional $100,000 per year and perhaps considerably more than that. This is not a lot of room for more funding (GiveWell moves millions of dollars per year to each of its top charities), but it’s enough that I expect REG could effectively use donations from me and probably from anyone else who might decide to donate to it as a result of reading this.
REG receives funding through the Effective Altruism Foundation (EAF), but there are two ways to direct money specifically to REG:
- you can donate through REG’s donations page, and the funds are earmarked for REG
- you can donate to EAF (formerly known as GBS Switzerland) and earmark your donations for REG
Learning Value
REG looks less exploratory than Charity Science, so donations to it probably have somewhat less learning value. Even so, REG is pursuing an unusual fundraising model with a lot of potential to expand (especially into other niches), so it still appears to have fairly strong learning value, and I want to see what sorts of results it can produce moving forward.
I know of a handful of other organizations that might be highly effective but that I don’t have much to say about. For these, I don’t have a strong sense of whether what they do is valuable, and they look sufficiently unlikely to be the best charity that I didn’t think they were worth investigating further at this time. I have included a brief note about why I’m not investigating each charity further.
- Global Priorities Project: insufficiently transparent about activities
- 80,000 Hours: unclear whether it has a positive effect
- Direct Action Everywhere: evidence of effectiveness is too murky
- Nonhuman Rights Project: weak evidence of effectiveness, unclear what partial success looks like
- EA Ventures: insufficiently transparent about who gets money
I have selected three finalist charities that are all plausibly the best, but they are in substantially different fields and therefore difficult to compare.
Brief explanations for charities I’m not supporting
Here I list all the charities I considered that are not finalists and briefly explain why I have chosen not to support them.
- GiveWell-recommended global poverty charities: small effect size relative to GCR reduction
- ACE-recommended veg outreach charities: small effect size relative to GCR reduction; weak evidence
- FHI, CSER: limited room for more funding
- FLI: weaker case than MIRI; concerns about encoding human values; Open Phil will fill room for more funding
- Future Open Phil-recommended GCR interventions: Good Ventures/other donors may fill room for more funding; money now is worth substantially more than money in a few years
- GiveWell/Open Phil: Good Ventures will fill room for more funding; less valuable than ACE
- Animal Ethics, Foundational Research Institute: too much uncertainty about whether they’re doing anything effective
- GWWC: unclear value
- Charity Science: raises for less effective charities than REG
I have narrowed the list of considered charities to three finalists:
- Machine Intelligence Research Institute (MIRI)
- Animal Charity Evaluators (ACE)
- Raising for Effective Giving (REG)
Here I give the advantages of each of them over the others.
In Favor of MIRI over ACE
- GCR reduction probably matters more than helping animals in the short term or spreading concern for animals, and AI safety looks like the most important and neglected GCR.
In Favor of ACE over MIRI
- I have a little more confidence that ACE leadership is good at achieving its goals.
- ACE has better learning value. Due to the nature of its work, its activities produce a lot of new information, and ACE researchers are trying hard to make this information high-value.
- ACE looks more funding-constrained, and animal welfare will probably continue to be an unpopular cause for longer than AI safety will. Similarly, funding now could do a lot to help ACE expand, whereas MIRI has stronger momentum.
In Favor of REG: Weighted Donation Multiplier
To get an idea of the value of REG’s fundraising, I looked at the charities for which they have raised money and assigned weightings to them based on how much impact I expect they have. I created two different sets of weightings: one where I assume AI safety is the most impactful intervention (with MIRI as the most effective charity) and one where I assume animal welfare/values spreading is highest leverage (with ACE as the most effective charity). The AI model reflects my current best guesses, but I created the animal model to see what sorts of results I would get.
This table shows how much money REG raised in each category over its four quarters of existence to date (in thousands of dollars), taken from its lovely transparency reports:
I used these fundraising numbers and assumed REG’s expenses through 2015Q2 are $100,000, extrapolating from 2014’s expenses of $52,318.
For my two models I used the following weights:
| Category | AI-Model Weight | Animal-Model Weight |
| --- | --- | --- |
- Veg advocacy includes charities that promote vegetarianism and meat reduction. All charities in this category were ACE recommendations or ACE standout charities.
- Speculative includes unconventional organizations that share my concern for non-human animals, including Animal Ethics and the Nonhuman Rights Project.
- Other includes everything else; most of this money went to GiveWell top charities.
For GBS, I conservatively assume that all money directed toward GBS goes to activities other than REG (and I give these activities a weight of 0.2). Accounting for GBS funding going back to REG involves some complications, so to be conservative I ignore any compounding effects that occur this way.
It’s quite possible that the categories in this model vary much more in effectiveness than the weights listed here suggest. I decided to keep all the weights relatively close together because I do not have strong confidence about how much good each of these categories does. I might be able to make an inside-view argument that, say, MIRI is 1000x more effective than anything else on this list, but from the outside view, I shouldn’t let such an argument carry too much weight.
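The weighting computation itself is simple enough to sketch in code. The amounts and weights below are hypothetical placeholders for illustration, not REG's actual figures (those come from the fundraising table and weights above):

```python
# Sketch of the weighted-donation-multiplier model described above.
# NOTE: the fundraising amounts and weights here are HYPOTHETICAL
# placeholders, not REG's actual figures.

def weighted_multiplier(raised_by_category, weights, expenses):
    """Weighted dollars raised per dollar of expenses."""
    weighted_total = sum(
        raised_by_category[cat] * weights[cat] for cat in raised_by_category
    )
    return weighted_total / expenses

# Hypothetical amounts raised, in thousands of dollars, by category.
raised = {"MIRI": 80, "Veg advocacy": 60, "Speculative": 20, "Other": 90}

# Hypothetical weights under an AI-first model (MIRI-equivalent dollars).
ai_weights = {"MIRI": 1.0, "Veg advocacy": 0.3, "Speculative": 0.2, "Other": 0.1}

expenses = 100  # assumed expenses, thousands of dollars

print(round(weighted_multiplier(raised, ai_weights, expenses), 2))  # → 1.11
```

Swapping in the animal-model weights (with ACE-recommended charities weighted at 1.0) gives the ACE-equivalent multiplier the same way.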
In the AI/MIRI model, I found that $10 of REG expenditures produced about $16 of weighted donations; in the ACE/animal model, every $10 spent produced $15 of weighted donations. This means that $10 to REG produced about $16 in equivalent donations to MIRI in the first model, and $15 in equivalent donations to ACE in the second model.4
When we weight the charities that REG has produced donations for, its fundraising ratio drops from 10:1 to a much more modest 1.5:1. Donating to REG instead of directly to an object-level charity produces an additional level of complexity, which means my money has more opportunities to fail to do good. A 1.5:1 fundraising ratio is probably high enough to outweigh my uncertainty about REG’s impact, but not by a wide margin.
But there’s another argument working in REG’s favor. I have considerable uncertainty about whether it’s more important to support values spreading-type interventions like what ACE or Animal Ethics does, or to support GCR reduction like MIRI. GCR reduction looks a little more important, but it’s a tough question. The fact that REG produces a greater-than-one multiplier using both a MIRI-dominated model and an ACE-dominated model means that if I donate to REG, I produce a positive multiplier either way. If I choose to donate to either MIRI or ACE, I could get it wrong; but if I donate to REG, in some sense I’m guaranteed to “get it right” because donations to REG probably produce greater than $1 in both MIRI-equivalent and ACE-equivalent donations.
I don’t want to put too much value on this fundraising ratio because there are various reasons why it could be off by a lot. It appears to show that REG fundraising is valuable even if you discount most of the charities it raises money for, which was my main intention. This alone is not sufficient to demonstrate REG’s effectiveness to my mind, but its leadership looks competent and its model has reasonably strong learning value.
A caveat: just because REG has raised a lot of funds for MIRI and animal charities in the past doesn’t mean it will continue to do so. But it raised these funds from a number of different people and over multiple quarters, so this is good reason to believe that it will continue to find donors interested in supporting MIRI and ACE/animal charities. Additionally, Ruairi Donnelly, REG’s Executive Director, has said to me in private communication that REG is meeting with more donors who want to fund far-future oriented work and that he hopes REG will move more money to these causes in the future.
There’s a concern about whether REG will continue to raise as much money per dollar spent as it has in the past. I expect REG to experience diminishing returns, although it is a new and very small organization, so returns should not diminish much in the near future. I don’t have a strong sense of the size of the market of poker players who might be interested in donating to effective causes. It looks considerably bigger than REG’s current capacity, so REG has some room to scale up, but I don’t know how long this will continue to be true. If REG’s fundraising ratio dropped to 5:1 and it didn’t increase funding to far-future charities, I would probably not donate to it; but it seems unlikely that it will drop that much in the near future.
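As a rough check on that 5:1 threshold: the weighted ratio is approximately the raw fundraising ratio times the average weight of the charities the money goes to, and the figures above imply an average weight of around 0.15 (a 10:1 raw ratio becoming roughly 1.5:1 weighted). A minimal sketch, assuming that average weight:

```python
# Back-of-the-envelope check: weighted ratio = raw ratio x average weight.
# The ~0.15 average weight is implied by the figures above
# (a 10:1 raw ratio becoming roughly 1.5:1 weighted).

def weighted_ratio(raw_ratio, avg_weight):
    return raw_ratio * avg_weight

def breakeven_raw_ratio(avg_weight):
    # Raw fundraising ratio needed to produce $1 of weighted value per $1 spent.
    return 1 / avg_weight

print(weighted_ratio(5, 0.15))              # → 0.75: below break-even
print(round(breakeven_raw_ratio(0.15), 1))  # → 6.7
```

So at a 5:1 raw ratio, each dollar to REG would move less than a dollar of weighted-equivalent donations, unless the mix shifted toward more heavily weighted charities.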
Edited 2015-10-17 after making the donation to REG.
Based on all these considerations, it looks like Raising for Effective Giving is the best charity to fund. My main concern here is falling into a meta trap. One possible solution here is to split donations 50/50 between meta- and object-level organizations. If I were to do this, I would give 50% to REG and 50% to MIRI. But I believe the EA movement could afford to be more meta-focused right now, so I feel comfortable giving 100% of my donations to REG.
After publishing this post, I spent one month talking to people about it and considering the issues involved. Nothing substantially updated my beliefs on cause selection during this period, so I directed my entire donation budget to REG.
How to Donate
You can donate to REG by visiting its donations page and specifying that you want to give 100% of your money to REG operating expenses. If you click through to the next page, it will give you instructions on how to donate based on what country you’re in.
If you live in the United States, you can make your donation tax-deductible by giving to GiveWell and asking it to forward the money to REG.
Where I’m Most Likely to Change My Mind
- Values spreading might be more important to fund than GCR reduction.
- REG might not have as large a donation multiplier as it appears to.
- Many of the charities REG directs donations to might be worse relative to the best object-level charity than I assumed, so donating directly to the best charity would have greater impact.
- Current far-future-focused interventions might have too-weak evidence supporting them.
I’ve had conversations with people who believe each of these, and while I’m unpersuaded right now, I find their positions plausible.
Considered separately, REG raises less than $1 directly for MIRI and less than $1 directly for ACE per dollar it spends, but I still consider it more valuable than direct donations to either MIRI or ACE individually. I explain why in the section on Raising for Effective Giving and in the conclusion. ↩
How to assess whether a person gives adequate concern to non-human animals could be the subject of an entire additional essay, but I don’t have a clear enough picture of how to do this to write well on the subject. My general impression is that people who claim to care about animals but have some justification for non-vegetarianism probably don’t actually care as much about animals as they say they do. They sometimes claim that the time and effort spent not eating animal products could be better spent donating to efficient charity (or something), but then don’t make trivial but hugely beneficial choices such as eating cow meat instead of chicken meat. I’m somewhat more convinced by people who eat animals but donate a lot of money to charities like The Humane League; I understand that vegetarianism is harder for some people than others, but actions signal beliefs more strongly than words do. ↩
Eric Herboso used to work at ACE as the Director of Communications; he’s currently earning to give while volunteering for ACE part time. ↩
It’s probably a coincidence that both models ended up with about the same weighted fundraising ratio. MIRI received about as much funding as ACE plus speculative non-human-focused charities, so these balance out in the two models. ↩