Here are some research topics on cause prioritization that look important and neglected, in no particular order.

  1. Look at historical examples of speculative causes (especially ones that were meant to affect the long-ish-term future) that succeeded or failed and examine why.
  2. Try to determine how well picking winning companies translates to picking winning charities.
  3. In line with 2, consider if there exist simple strategies analogous to value investing that can find good charities.
  4. Find plausibly effective biosecurity charities.
  5. Develop a rigorous model for comparing the value of existential risk reduction to values spreading.
  6. Perform basic analyses of lots of EA-neglected or weird cause areas (e.g. depression, argument mapping, increasing savings, personal productivity–see here) and identify which ones look most promising.
  7. Reason about the expected value of the far future.
  8. Investigate neglected x-risk and meta charities (FHI, CSER, GPP, etc.).
  9. Reason about expected value estimates in general. How accurate are they? Do they tend to be overconfident? How overconfident? Do some things predictably make them more reliable?

1. Historical speculative causes

We want to positively influence the far future, but doing that is really hard. One way to get a better sense of how to do this might be to look at historical cases where people tried to influence the future (or otherwise pursued highly speculative strategies), examine cases that worked and didn’t, and try to find patterns. If we get good information on this, it could tell us a lot about whether it’s worthwhile to support organizations that are trying to affect the far future, and what we should look for as signs of probable success.

2. Picking winning companies/charities

Are people who are good at picking strong companies also good at picking strong charities? These skills seem strongly related–they both involve predicting the effects of an organization in a complex system. Picking winning charities may be easier because the charity market is probably less efficient than the equities market, but it still seems to be reasonably efficient (Open Phil reported that when it started investigating the most promising global health interventions, it found that most of them had been fully funded already). If these skills are related, a reasonable career path for altruists might be to spend a few years at a hedge fund or wealth management firm and then move into charity evaluation. It would also suggest that we should recruit effective altruists out of investment firms.

3. Charity value investing

The founders of GiveWell like to say that finding exceptional charities is harder than most people believe. GiveWell has put in a lot of work to find effective charities, and we can be reasonably confident that the charities they recommend are much better than most. This makes GiveWell analogous to a hedge fund that puts tons of research into individual companies to find ones that outperform the market. (It’s probably not a coincidence that GiveWell’s founders used to be hedge fund analysts.) Do there exist ways to do even better by exploiting “market inefficiencies” in the charity market?

We actually already know about a few market inefficiencies: donors care more about humans in developed countries than humans in developing countries; they care too little about non-human animals relative to humans; they don’t sufficiently consider effects on the far future. Effective altruists already take advantage of these inefficiencies. Are there any others that we aren’t exploiting?

4. Biosecurity interventions

Biosecurity looks like one of the most important global catastrophic risks. Governments are devoting some resources to this area, but there’s little work being done in the nonprofit sector; and without having looked into it much, my prior is that most government biosecurity activities are relatively ineffectual.

Open Phil is currently devoting substantial time to researching biosecurity, and is making grants with a primary aim of getting more useful information. The space is complex and I don’t know that most individual philanthropists have enough information to make informed decisions. Nonetheless, it might be useful to look for funding-constrained organizations working in biosecurity that are plausibly highly effective. I doubt we could gain a high level of confidence in such an organization, but we could probably identify a biosecurity organization that looks as strong as, say, MIRI.

Carl Shulman posted a comment on Slate Star Codex where he gives a short list of organizations working on biosecurity. I briefly looked at the organizations, and none of them looks easy to evaluate for someone who knows almost nothing about biosecurity. But I do not expect it would require a high level of expertise to become sufficiently confident. At the time of this writing I feel confident supporting MIRI; I know considerably more about computer science and AI than most people, but I am by no means an expert. Much of my confidence comes from my understanding that MIRI employees have a good grasp on the problem and have strong relevant skills. I expect that someone with about 1000 hours of experience in biosecurity could identify a biosecurity intervention with the same level of confidence that I have in MIRI.

5. Comparing existential risk to values spreading

Paul Christiano, Brian Tomasik, and I have written surface-level discussions of whether existential risk reduction or values spreading can be expected to have a larger positive effect on the long-term future. This question is critically important, but I do not observe a lot of people talking or writing about it. I know a number of people who recognize that it’s an important question but then come to a quick judgment and don’t think about it in depth. Or perhaps people think about it much more carefully than I can see, but don’t talk publicly about their thoughts. Either way, this is a subject on which I would like to see more in-depth discussion and attempted resolutions.

6. Basic analyses of weird causes

Issa Rice is doing something like this with the cause prioritization wiki. The wiki is still fairly small–it looks like contributions to it are moving slowly, and it covers such a wide breadth that it’s hard to make progress–but I am interested in seeing work like this continue.

7. Expected value of the far future

I have heard a few people claim that it’s obvious that the far future has positive expected value. This does not seem so obvious to me, and I’ve never seen a serious justification. I wrote my own analysis of the expected value of the far future, but I am not confident in my ability to think of every reasonable far-future scenario and assess the likelihood of each. The question of what the far future looks like is critically important to how we act. Are we sufficiently confident that the future will be very good? Then we should probably work to reduce existential risk. Will the people who control the future likely have bad values? Then we should probably focus on spreading good values. Is there too much uncertainty about the far future? Then perhaps we should build our capacity to effect change once we figure out what we need to do, or just focus on improving the world directly. Each of these strategies looks substantially different, and which one we should pursue depends heavily on what we expect the far future to look like. I have seen little writing on this subject beyond shallow speculation, and I would like to see a lot more.
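To make the scenario-weighing above concrete, here is a minimal sketch of an expected value calculation over far-future scenarios. Every scenario, probability, and value below is a made-up placeholder for illustration, not an estimate I am defending.

```python
# Toy expected-value model of the far future. All probabilities and
# values are hypothetical placeholders in arbitrary units.
scenarios = {
    # name: (probability, value)
    "flourishing future":     (0.30, 1000),
    "mediocre future":        (0.40,   10),
    "future with bad values": (0.20, -500),
    "extinction":             (0.10,    0),
}

def expected_value(scenarios):
    """Probability-weighted sum over scenarios."""
    return sum(p * v for p, v in scenarios.values())

print(expected_value(scenarios))
```

The point of even a crude model like this is that the sign of the total, not just its magnitude, drives the strategy choice: existential risk reduction looks better when the total is robustly positive, while values spreading looks better when the “bad values” scenarios dominate.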

8. X-Risk and research charities

There are a few x-risk charities (Future of Humanity Institute, Centre for the Study of Existential Risk, Machine Intelligence Research Institute) and a few speculative research charities (Global Priorities Project, Foundational Research Institute, Effective Altruism Foundation) for which there do not exist any serious writeups of their activities and analysis of impact. I don’t expect that GiveWell will ever evaluate these organizations, or at least not in the near future.

In my charity selection writeup, I wrote a considerable amount about MIRI, and I found that FHI and CSER do not currently have room for more funding (although I don’t expect this to remain true forever). I would at least like to see analyses of the others.

9. Expected value estimates

Holden has written about why we can’t take expected value estimates literally, and about reasoning with explicit expected value estimates versus using a more holistic, multi-angled approach. GiveWell has a reasonably clear stance on the significance of expected value estimates (i.e. they’re not that important) and has published some justifications for it, but the justifications involve a lot of judgment calls and aren’t that strong. Unfortunately, no one else has produced substantially stronger arguments about when expected value estimates are appropriate.
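Holden’s argument can be given a simple formal shape: a noisy estimate should be shrunk toward a prior. The sketch below assumes a normal prior and normally distributed estimate error; the conjugate-normal model and all the numbers are my own illustrative assumptions, not taken from the post.

```python
# Bayesian adjustment of an explicit cost-effectiveness estimate,
# in the spirit of Holden's argument that noisy estimates should be
# shrunk toward a prior. Model and numbers are illustrative only.

def adjusted_estimate(prior_mean, prior_var, estimate, error_var):
    """Posterior mean under a normal prior N(prior_mean, prior_var)
    and a normally distributed estimate with variance error_var:
    a precision-weighted average of the prior and the estimate."""
    weight = prior_var / (prior_var + error_var)  # weight on the estimate
    return prior_mean + weight * (estimate - prior_mean)

# Hypothetical charity claiming 100 units of good per dollar, against
# a prior centered at 5 with standard deviation 10.
noisy = adjusted_estimate(5, 10**2, 100, 50**2)   # very noisy estimate
precise = adjusted_estimate(5, 10**2, 100, 5**2)  # very reliable estimate
print(round(noisy, 2), round(precise, 2))
```

With these numbers, the noisy estimate of 100 is adjusted down to about 8.65 while the reliable one keeps most of its value (81). This is one formal sense in which an eye-popping explicit estimate carries little weight unless we also believe its error is small.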