Altruists often would like to get good predictions on questions that don’t necessarily have great market significance. For example:

  • Will a replication of a study of cash transfers show similar results?
  • How much money will GiveWell move in the next five years?
  • If cultured meat were price-competitive, what percent of consumers would prefer to buy it over conventional meat?

If a donor would like to give money to help make better predictions, how can they do that?

You can’t just pay people to make predictions, because then there’s no incentive for the predictions to be accurate and well-calibrated. One step better would be to pay out only when a prediction turns out to be correct, but that still attracts uninformed forecasters, because there’s no downside to being wrong.
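To see why paying only for correct predictions attracts uninformed guessers, a toy expected-value calculation is enough (the payout amount here is a made-up number for illustration):

```python
# Toy expected-value calculation for a "pay only if correct" scheme.
payout = 100        # hypothetical reward for a correct prediction
penalty = 0         # nothing is lost for a wrong prediction
p_correct = 0.5     # an uninformed forecaster guessing at random

expected_value = p_correct * payout - (1 - p_correct) * penalty
print(expected_value)  # 50.0: blind guessing still has positive expected value
```

With no penalty term, the expected value is positive no matter how uninformed the forecaster is, so guessing is always worthwhile.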

Another idea is to offer to make large bets, so that your counterparty can make a lot of money for being right, but they also want to avoid being wrong. That would incentivize people to actually do research and figure out how to make money off of betting against you. This idea, however, doesn’t necessarily give you great probability estimates because you still have to pick a probability at which to offer a bet. For example, if you offer to make a large bet at 50% odds and someone takes you up on it, then that could mean they believe the true probability is 60% or 99%, and you don’t have any great way of knowing which.

You could get around this by offering lots of bets at varying odds on the same question. That would technically work, but it’s probably a lot more expensive than necessary. A slightly cheaper method would be to determine the “true” probability estimate by binary search: offer to bet either side at 50%; if someone takes the “yes” side, offer again at 75%; if they then take the “no” side, offer at 62.5%; continue until you have reached satisfactory precision. This is still pretty expensive.
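The binary-search procedure above can be sketched as follows. Here `take_bet` is a stand-in for the market’s response to an offered bet; it is a hypothetical oracle, not a real API:

```python
def estimate_probability(take_bet, lo=0.0, hi=1.0, tol=0.01):
    """Narrow down the market's implied probability by binary search.

    take_bet(p) stands in for offering a large bet at probability p:
    it returns "yes" if someone takes the "yes" side (implying they
    think the true probability is above p), and "no" otherwise.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if take_bet(mid) == "yes":
            lo = mid  # bettors believe the probability is above mid
        else:
            hi = mid  # bettors believe the probability is below mid
    return (lo + hi) / 2

# Example: a counterparty who privately believes the probability is 0.8.
estimate = estimate_probability(lambda p: "yes" if 0.8 > p else "no")
```

Each round halves the interval, so reaching a precision of `tol` takes about log2(1/tol) offers (roughly seven for one-percentage-point precision), and every offer must be a large bet someone is actually willing to take, which is why this is still pretty expensive.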

In theory, if you create a prediction market, people will be willing to bet lots of money whenever they think they can outperform the market. You might be able to start up an accurate prediction market by seeding it with your own predictions; then savvy newcomers will come and bet with you; then even savvier investors will come and bet with them; and the predictions will get more and more accurate. I’m not sure that’s how it would work out in practice. And anyway, the biggest problem with this approach is that (in the US and the UK) prediction markets are heavily restricted because they’re considered similar to gambling. I’m not well-informed about the theory or practice of prediction markets, so there might be clever ways of incentivizing good predictions that I don’t know about.

Anthony Aguirre, co-founder of Metaculus (a website for making predictions), proposed paying people based on their track record: people with a history of making good predictions get paid to make more predictions. This incentivizes people to establish and maintain a track record of good predictions, even though they don’t get paid directly for accurate predictions per se.
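Aguirre’s proposal doesn’t specify how a track record would be scored; one standard choice (an assumption here, not something Metaculus has committed to) is a proper scoring rule such as the Brier score:

```python
def brier_score(forecasts):
    """Mean squared error between forecast probabilities and outcomes.

    forecasts: list of (predicted_probability, outcome) pairs, where
    outcome is 1 if the event happened and 0 otherwise.
    Lower is better: 0 is perfect; always guessing 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A confident, correct forecaster scores better than a pure hedger.
confident = brier_score([(0.9, 1), (0.1, 0)])  # about 0.01
hedger = brier_score([(0.5, 1), (0.5, 0)])     # 0.25
```

A useful property of proper scoring rules like this is that a forecaster minimizes their expected score by reporting their true beliefs, so payment tied to such a score rewards calibration rather than bravado.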

Aguirre has said that Metaculus may implement this incentive structure at some point in the future. I would be interested to see how it plays out and whether it turns out to be a useful engine for generating good predictions.

One practical option, which goes back to the first idea I mentioned, is to pay a group of good forecasters like the Good Judgment Project (GJP). In theory, they don’t have a strong incentive to make good predictions, but they did win IARPA’s 2013 forecasting contest, so in practice it seems to work. I haven’t looked into how exactly to get predictions from GJP, but it might be a reasonable way of converting money into knowledge.

Based on my limited research, it looks like donors may be able to incentivize good predictions reasonably effectively with a consulting service like GJP, or perhaps by doing something involving prediction markets, although I’m not sure what. I still have some big open questions:

  1. What is the best way to get good predictions?
  2. How much does a good prediction cost? How does the cost vary with the type of prediction? With the accuracy and precision?
  3. How accurate can predictions be? What about relatively long-term predictions?
  4. Assuming it’s possible to get good predictions, what are the best types of questions to ask, given the tradeoff between importance and predictability?
  5. Is it possible to get good predictions from prediction markets, given the current state of regulations?

Discuss on the Effective Altruism Forum.