# How Much Leverage Should Altruists Use?

Cross-posted to the Effective Altruism Forum.

Last updated 2020-01-08.

## Summary

Philanthropic investors probably have greater risk tolerance than self-interested ones. Altruists can use leverage—borrowing money to invest—to increase the expected utility of their portfolios. They may wish to lever their portfolios at much higher ratios than self-interested investors—likely 2:1 to 3:1, and perhaps much higher (practical concerns notwithstanding).
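As a rough illustration of where ratios like 2:1 to 3:1 can come from, here is the standard Merton rule for optimal allocation to a risky asset under CRRA utility. The return, volatility, and risk-aversion numbers are illustrative assumptions of mine, not figures from the essay:

```python
# Merton's rule for the optimal fraction of wealth in a risky asset:
#   k = (mu - rf) / (eta * sigma^2)
# where eta is relative risk aversion. The market figures below are
# illustrative assumptions, not numbers from the essay.

def optimal_leverage(mu: float, rf: float, sigma: float, eta: float) -> float:
    """Optimal risky-asset weight under CRRA utility (Merton's rule)."""
    return (mu - rf) / (eta * sigma ** 2)

# A log-utility (eta = 1) investor facing a 5% risk premium and 16% volatility:
k_self = optimal_leverage(mu=0.08, rf=0.03, sigma=0.16, eta=1.0)

# A more risk-tolerant altruist (eta = 0.7, say) would lever higher:
k_altruist = optimal_leverage(mu=0.08, rf=0.03, sigma=0.16, eta=0.7)

print(round(k_self, 2), round(k_altruist, 2))  # 1.95 2.79
```

Under these assumed inputs, even a log-utility investor holds nearly 2:1 leverage, and lower risk aversion pushes the ratio toward 3:1.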

Unlike normal investors, altruists care about reducing their correlations with other investors, so they should heavily tilt their portfolios toward uncorrelated assets.

This essay will discuss:

1. Basic arguments for using leverage
2. Appropriate levels of risk for altruists
3. The importance of uncorrelated assets, and where investors might be able to find them
4. Potential changes for philanthropic behavior

Disclaimer: I am not an investment advisor and this should not be taken as investment advice. Please do your own research or seek professional advice and otherwise take reasonable precautions before making any significant investment decisions.

Posted on

# Correction on Giving Now vs. Later

In an early draft of a paper, Philip Trammell points out two mistakes in my essay on giving now vs. later. And since writing that essay, I have identified a third mistake.

The first mistake: My model assumed that utility is logarithmic with income. Empirical research suggests that utility may be sub-logarithmic with income, which my model does not allow for. Accounting for this would make giving now look relatively better. (I do not think it is clearly true that utility is sub-logarithmic with income, but it could be, and it would be better to account for the possibility.)

The second mistake: My model compared the present-day discount rate with the present-day investment rate, but this is not the correct comparison. To see why, consider the possibility that the discount rate currently exceeds the investment rate, but that the discount rate is dropping (because good giving opportunities are drying up), and at some future time t, the investment rate will surpass and then permanently exceed the discount rate.

In this scenario, you will do more good by donating all your money today than by waiting until time t to donate. But if you continue investing for long enough after time t, the discounted present value of your donation will eventually surpass the value of donating today. Therefore, in this scenario you should give later, no matter how high the present-day discount rate may be.
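A toy simulation makes this scenario concrete. The rates below are invented for illustration: the investment rate is constant while the discount rate starts above it and decays toward a floor beneath it, so donating at intermediate dates is worse than donating now, while donating far enough in the future is better:

```python
import math

# Illustrative (invented) rates: investments return a constant 5%;
# the philanthropic discount rate starts at 8% and decays toward 2%.
R = 0.05

def discount_rate(t: float) -> float:
    return 0.02 + 0.06 * math.exp(-t / 10)

def relative_value(T: int) -> float:
    """Discounted value of donating at year T, relative to donating now."""
    v = 1.0
    for t in range(1, T + 1):
        v *= (1 + R) / (1 + discount_rate(t))
    return v

# Donating at an intermediate date (year 5, before the rates cross over)
# is worse than donating now, but donating at year 30 is better:
print(relative_value(5) < 1, relative_value(30) > 1)  # True True
```

With these numbers the crossover happens around year 7, yet waiting until year 30 still beats donating today, which is the point of the correction.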

Trammell discusses this scenario, including why the investment rate will eventually exceed the discount rate, in his paper. The paper is a draft, but as of this writing, the relevant discussion occurs in section 5.1. He discusses the argument at a high level in RPTP Is a Strong Reason to Consider Giving Later on the Effective Altruism Forum (and probably explains it better than I did).

The third mistake: My model ignored risk. We can only directly compare the investment rate r with the growth rate g using a simple inequality (namely, r > g) if r and g are perfectly correlated. If r and g move in lock step, our utility function over future spending can be simplified to directly compare r and g. But my model introduced modifications that could disrupt the correlation. It added discounts to global poverty interventions that don’t directly depend on the consumption level g, and it introduced factors such as valuation into the investment rate of return. These additional factors violate the assumption that the investment rate and discount rate have perfect correlation.

To account for risk, we cannot simply add up all the terms on each side. Instead, we need to do the hard(er) work of calculating the discounted expected utility of giving later relative to the utility of giving now.
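To see why comparing expected rates is not enough once the perfect correlation breaks, consider a toy Monte Carlo with log utility. When the growth rate g is random, the value of donating later involves E[1/(1+g)], which by Jensen's inequality exceeds 1/(1+E[g]). The distribution below is an assumption purely for illustration:

```python
import random

random.seed(0)

# Growth rate g is random: mean 2%, standard deviation 5% (invented numbers).
samples = [random.gauss(0.02, 0.05) for _ in range(200_000)]

# Under log utility, the marginal value of a donation next year scales
# with 1/(1+g). Its expectation is NOT 1/(1 + E[g]):
naive = 1 / 1.02
monte_carlo = sum(1 / (1 + g) for g in samples) / len(samples)

print(monte_carlo > naive)  # True, by Jensen's inequality
```

So once r and g are risky and imperfectly correlated, plugging expected rates into the inequality r > g gives the wrong answer; you have to take expectations of utility, not of rates.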

Fixing these problems requires making substantial modifications to my model. Trammell’s paper covers the subject much more effectively than my essay did, so rather than updating my model, I will defer to his.

It is worth emphasizing that Trammell’s paper is still a draft—it contains some missing sections and may substantially change before publication—but I believe his general approach works much better than the one in my essay.


# Are All Actions Impermissible Under Kantian Deontology?

Epistemic status: I don’t really understand Kantian deontology.

The classic footbridge dilemma, a variation on the trolley problem:

You see a trolley running down a track that has five people working on it. If the trolley hits them, it will kill them. You are standing on a footbridge overlooking the track, and a fat man stands next to you. If you push the fat man off the footbridge, then he will get crushed by the trolley, but his death will save the lives of the other five people. Should you push him?

A Kantian deontologist would say no. According to Kant, you must treat people as ends in themselves, not merely as means; so you should not use the man to save the other five. (For our purposes, this is equivalent to the intuitionist moral claim that killing is categorically wrong. This essay somewhat vacillates between describing Kantian and intuitionist deontology.)

Now consider an alternative, what we can call the footbridge phone booth dilemma:

You see a trolley running down a track that has five people working on it. If the trolley hits them, it will kill them. You are standing on a footbridge overlooking the track, and an opaque phone booth stands next to you. The phone booth may be out of commission, in which case it contains some concrete weighing it down; otherwise, there is a person inside the phone booth. If you push the phone booth off the footbridge and it contains either concrete or a person, it will stop the trolley, but anyone inside the phone booth will die1. Should you do it?

In this case, you might be using a person merely as a means, but you don’t know for sure because you don’t know if anyone is inside the phone booth.

A deontologist can claim one of two things about how to handle this new dilemma:

1. It is always wrong to push the phone booth as long as there is some nonzero probability that it contains a person.
2. There is some largest probability p of a person being inside the phone booth such that pushing the booth is permissible; at any higher probability, it is wrong to push the booth off the bridge.

(Or you could claim that pushing the phone booth is always permissible, but that is generally regarded as a consequentialist position, so I will not discuss it.)

## Position 1: All actions are impermissible

Perhaps it is always wrong to push the phone booth as long as there is some nonzero probability that it contains a person. That is, you ought not perform an action if that action has a nonzero probability of treating a person merely as a means rather than an end. Or, to use a more intuitive/less Kantian formulation, you ought not perform any action that has a nonzero probability of killing someone. But every action has nonzero probability of killing someone; therefore, all actions are impermissible.

Why is it true that every action has nonzero probability of killing someone? This follows from the fact that we should assign nonzero probability to every proposition.

Why should we assign nonzero probability to every proposition? Well, suppose you believe some proposition has probability 0. That means no amount of evidence or reasoning, no matter how strong, could ever convince you to change your mind. In formal terms, if C is a claim with zero probability of being true, and E is some evidence,

$$P(C \mid E) = P(C) \cdot \frac{P(E \mid C)}{P(E)}$$

If $P(C) = 0$, then $P(C \mid E)$ must be 0 as well, no matter what the evidence is. For any statement, there must be some evidence that would convince you to update your beliefs. As Dennis Lindley wrote in Making Decisions:

[I]f a decision maker thinks something cannot be true and interprets this to mean it has zero probability, he will never be influenced by any data, which is surely absurd. So leave a little probability for the moon being made of green cheese; it can be as small as one in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.

For more on this, read How to Convince Me That 2 + 2 = 3 and Infinite Certainty from LessWrong.
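Lindley's point can be checked with a direct Bayesian update. A prior of exactly zero survives any likelihood ratio, while even a tiny nonzero prior can be overwhelmed by strong evidence:

```python
def posterior(prior: float, p_e_given_c: float, p_e_given_not_c: float) -> float:
    """Bayes' theorem: P(C|E) for a binary claim C and evidence E."""
    p_e = prior * p_e_given_c + (1 - prior) * p_e_given_not_c
    return prior * p_e_given_c / p_e

# A zero prior is immune to arbitrarily strong evidence:
print(posterior(0.0, 1.0, 1e-9))  # 0.0

# A one-in-a-million prior, hit with billion-to-one evidence, updates
# to near-certainty:
print(posterior(1e-6, 1.0, 1e-9) > 0.99)  # True
```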

Alternatively, one might object on the basis of the definition of killing, which, one might argue, requires an intention to cause a person to die. Even if all actions might cause someone to die, that’s not the same as killing, and might not necessarily be impermissible. I would respond by returning to the phone booth dilemma. You do not know whether the phone booth contains a person, but you know it will stop the trolley; your intention is only to save the five workers on the track. In Kantian terms, you do not know whether you are treating someone merely as a means. Nonetheless, Position 1 asserts that pushing the phone booth is wrong. If that can be wrong even though you have no intention to cause someone to die, then it is similarly wrong to take an action that entails causing someone to die.

One could go a step further and claim that causing someone to die is only categorically wrong if the desired outcome of your action will come about only if the person dies—basically falling back to the Kantian notion that it is wrong to treat people merely as means. But it is still true that, for any action you take with a desired outcome, there is some nonzero probability that the outcome will come about only if someone dies. The phone booth dilemma demonstrates there could exist a situation in which you probabilistically cause someone’s death, which is categorically wrong according to Position 1; if the situation can exist, then any action could result in this situation with nonzero (if tiny) probability, and therefore all actions are morally wrong.

## Position 2: Deontology reduces to consequentialism

Suppose the phone booth has some probability of containing a person. Let p be the largest such probability at which it is permissible to push the phone booth off the bridge.

Let’s talk about my favorite theorem: the Von Neumann-Morgenstern utility theorem, or VNM for short. VNM states that if we accept four axioms (Completeness, Transitivity, Continuity, and Independence), then there exists a utility function such that we best satisfy our values by maximizing the expected value of that function. Three of these axioms are uncontroversial2, but deontology traditionally entails rejecting the axiom of continuity, which states that for three outcomes L, M, and N:

If $L \le M \le N$, then there exists a probability p such that $pL + (1 - p)N \sim M$

(where the ~ symbol indicates that you are indifferent between the two choices.)

Deontologists typically would claim that it is possible for L to be categorically worse than M, such that it is never worth accepting any probability of L, no matter how small. (This is consistent with Position 1 above.)

In the case of the phone booth dilemma, assume the following:

- L = pushing the phone booth when it contains a person
- N = pushing the phone booth when it contains concrete
- M = not pushing the phone booth at all

From this, we can see that accepting Position 2 requires accepting Continuity. If we accept the other three VNM axioms, then the theorem holds: morality entails maximizing the expected value of some utility function.
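If Continuity holds, the threshold probability p falls straight out of the utilities: solving $p \cdot u(L) + (1 - p) \cdot u(N) = u(M)$ for p. The utility numbers below are made up purely for illustration:

```python
def indifference_probability(u_l: float, u_m: float, u_n: float) -> float:
    """Solve p*u(L) + (1-p)*u(N) = u(M) for p, given u(L) <= u(M) <= u(N)."""
    return (u_n - u_m) / (u_n - u_l)

# Made-up utilities: L = push, person inside (-10); M = don't push (0);
# N = push, concrete inside (+10).
p = indifference_probability(u_l=-10.0, u_m=0.0, u_n=10.0)
print(p)  # 0.5

# Sanity check: at probability p of a person being inside, pushing the
# booth (the lottery over L and N) is exactly as good as not pushing (M).
assert p * -10.0 + (1 - p) * 10.0 == 0.0
```

That is, with these particular utilities, a deontologist who accepts Position 2 would push the booth whenever the chance of a person being inside is at most 50%.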

Note that, despite the terminology, just because we have a utility function doesn’t mean utilitarianism is true. Utilitarianism necessitates a particular type of utility function where utility is defined as the aggregate well-being minus suffering of all creatures. But VNM does entail consequentialism, because the only thing we care about is consequences—specifically, consequences as measured by the VNM utility function.

## Conclusion

When we attempt to resolve the footbridge phone booth dilemma from a deontological perspective, we must take one of two positions. Position 2 reduces to consequentialism, so ultimately Position 1 is the only claim a deontologist can make.

But if this position does not allow us to take any action that has a nonzero probability of causing an impermissible outcome (such as killing someone)3, and we can never be absolutely certain that an action will not result in such an outcome, then all actions are impermissible.

The footbridge phone booth dilemma demonstrates the impermissibility of all actions for Kantian deontology as well as intuitionist “killing is wrong” deontology. The same reasoning applies to any form of ethics that creates categorical prescriptions. This even applies to mixed forms of consequentialism and deontology. For example, if we say that you ought to save as many lives as possible, but with the restriction that you must never cause anyone to die, we still run into the problem where any action has some probability of resulting in a death. For any ethical system that asserts that an outcome is categorically wrong, all actions are impermissible, because all actions have some probability of causing such an outcome.

A system that declares all actions impermissible cannot prescribe actions, and thus fails as an ethical theory.

# Notes

1. We must stipulate that a phone booth without concrete or a person in it is not heavy enough to stop the trolley. If the phone booth by itself can stop the trolley, then we are not using the person inside the booth as a means to stop the trolley, so the act of pushing might not be wrong according to Kant. Whether it is wrong doesn’t matter for our purposes; what we care about is that in the scenario, the (potential) person in the phone booth is being treated merely as a means.

2. I know people who reject Independence of Irrelevant Alternatives (IIA), which is conceptually related to but not the same as the VNM Axiom of Independence. You can reject the former while still accepting the latter. (IIA is irrelevant to VNM because VNM assumes that you have a fixed set of choices.) I have never heard of anyone seriously rejecting any of the VNM axioms other than Continuity, although admittedly I don’t know as much as I could about the subject.

3. The distinction between deontology and consequentialism often is described as follows: deontology concerns actions/intentions, while consequentialism cares about consequences. It may seem strange that I am talking about outcomes, but it still makes sense to speak of impermissible outcomes under deontology. For a deontological theory to make claims about the permissibility of actions, it must examine outcomes. Moral rules such as “do not kill” make absolute prescriptions on actions with respect to the outcomes that those actions produce. An action is wrong if it results in the outcome of a person dying (and when the circumstances of the action fit what we mean by “killing,” as distinct from merely causing someone to die). When I say “impermissible outcome” I mean an outcome such that, if an action would produce that outcome with probability 1, the action is impermissible.


# New Page: Convert Credences into a Bet

https://mdickens.me/credence-bet/

In response to a Facebook post, I created a page that makes it easy to bet with people. If two people disagree about a claim and want to bet on it, they can use this form to calculate how much money each person should stake. Each person inputs their best estimate of the probability that the claim is true, and the form tells them how much to bet. The form ensures that the bet is fair for both participants: they both expect to win the same amount of money.


# What Are the Best TV Shows (According to IMDb Episode Ratings)?

Recently, I was browsing IMDb’s list of top-rated TV shows.

According to IMDb ratings, Planet Earth II is the second-best TV show of all time, with 9.5 stars out of 10. But if you look at the ratings of each individual episode, they range from 6.8 to 7.91.

In general, the rating of a TV show usually differs from the average rating of that show’s episodes. What does the list of top TV shows look like if we sort by average episode rating instead of show rating? Perhaps voters have different motivations when they’re rating shows than when they’re rating individual episodes, and it could be interesting to see how the ratings differ.
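The re-ranking itself is straightforward once you have per-episode ratings (IMDb publishes downloadable datasets). The numbers below are made up, standing in for real data:

```python
from statistics import mean

# Made-up ratings standing in for real IMDb data:
# each show has its own rating plus a list of episode ratings.
shows = {
    "Show A": {"show_rating": 9.5, "episodes": [6.8, 7.2, 7.5, 7.9]},
    "Show B": {"show_rating": 9.0, "episodes": [8.9, 9.1, 9.4, 9.6]},
}

# Rank by average episode rating rather than the show's own rating.
ranked = sorted(shows, key=lambda s: mean(shows[s]["episodes"]), reverse=True)
print(ranked)  # ['Show B', 'Show A']
```

In this toy example the ranking flips: the show with the higher overall rating has the lower average episode rating, which is exactly the Planet Earth II pattern.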


# High School Science Experiments

Experiments are a critical part of science—perhaps even the central feature. But middle school and high school science experiments don’t teach students how experiments are supposed to work.

The way I did science experiments in school went something like this:

1. Learn about some natural phenomenon.
2. Teacher explains an experiment intended to test the natural phenomenon or at least vaguely relate to it.
3. We run the experiment, with the ostensible goal of observing the natural phenomenon.
4. We get results totally different from what the laws of nature predict.
5. Whatever, let’s move on to the next subject.

For example, I remember in physics class we learned how the acceleration of an object rolling down an incline depends on the steepness of the incline. Then we did an “experiment” to test this by rolling marbles down inclines and measuring how far they traveled in a fixed amount of time. The results we got were inconsistent with the laws of mechanics, but nobody questioned this. We all assumed that our experiment was not sufficiently well-controlled to produce reliable results (which was accurate).
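For reference, here is what that experiment should have produced: a uniform ball rolling without slipping accelerates at (5/7)·g·sin θ (the 5/7 factor comes from its rotational inertia), so the distance covered from rest is ½at². The specific angle and time are arbitrary examples:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def rolling_distance(theta_deg: float, t: float) -> float:
    """Distance traveled by a uniform ball rolling (without slipping)
    from rest down an incline of angle theta_deg, after t seconds."""
    # Rolling with moment of inertia I = (2/5) m r^2 gives a = (5/7) g sin(theta).
    a = (5 / 7) * G * math.sin(math.radians(theta_deg))
    return 0.5 * a * t ** 2

# On a 30-degree incline, after 1 second:
print(round(rolling_distance(30, 1.0), 2))  # 1.75 (meters)
```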

This is the antithesis of how experiments are supposed to work. The point of running an experiment is to learn something about the world. Experiments should be well controlled so you can be confident that you are learning something.

Running a good experiment is not easy. Experiments can easily fail to produce good results, so they must be designed carefully. Designing good experiments is a skill. And the way experiments are done in school does nothing to teach this skill.

If you know in advance that you have bad methodology and you’re going to throw away the results of your experiment, what’s the point? Experiments as they are done in school don’t teach about natural laws (because you ignore whatever results you get), and they don’t teach how to design good experiments (because no effort is made to produce consistent results).

I can imagine an effective science class that focused on teaching students how to design experiments. You could perhaps start by providing students a simple natural law, such as an object’s acceleration on an inclined plane, then challenge them to produce an experiment that replicates the results. If they don’t produce consistent results, push them to figure out why, and refine their experimental conditions until they can get reliable measurements.

But the point of an experiment isn’t (usually) to reproduce known results—it’s to figure out something unknown. A good experiment should be able to falsify a hypothesis; you shouldn’t just keep changing your experiment until you get the expected results. (The process I described in the previous paragraph is basically P-hacking.) I don’t know how you would teach people to get from “design an experiment that can consistently replicate a known natural law” to “design an experiment that can tell you something you don’t already know, and be confident that it’s correct.” But I’ve only been thinking about it for a few minutes. We are collectively wasting tens of millions of hours per year having students run experiments while learning nothing about how to run experiments, and I’m sure we can do better.

Let me throw out a slightly more sophisticated idea for how to teach experiments. Give students a natural phenomenon to investigate; it should be something they probably don’t already know (so they don’t know what result to expect), but that isn’t too hard to test. Divide the students into groups and have them design and implement experiments to figure out the phenomenon. Then challenge them to peer review each other’s experiments and look for flaws. Refine the experiments until most of the class agrees on the correct methodology and can replicate each other’s results.

This also provides a natural way to teach students statistics. If you need to develop good experimental methodologies, you need to have a way of knowing how reliable your results are and how many trials to run. Some students will try to understand how to do this, and as they begin to think more deeply about it, they will inevitably ask the same questions that inferential statistics is meant to answer. This is the perfect time to equip them with some statistics knowledge that they can use to improve their understanding of science.
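One concrete question those students would run into is: how many trials do I need to hit a given precision? The standard answer uses the standard error of the mean. The numbers below are arbitrary examples:

```python
import math

def trials_needed(sigma: float, margin: float, z: float = 1.96) -> int:
    """Trials needed so a 95% confidence interval for the mean has
    half-width `margin`, given per-trial standard deviation sigma."""
    return math.ceil((z * sigma / margin) ** 2)

# Suppose marble-rolling distances scatter with sigma = 5 cm; to pin the
# mean down to within 1 cm at 95% confidence:
print(trials_needed(sigma=0.05, margin=0.01))  # 97
```

Arriving at a formula like this from the question "how many times should we roll the marble?" is exactly the kind of motivated entry point into inferential statistics the paragraph describes.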

I’m tempted to get overzealous about how significant it would be if we consistently ran science classes this way. I would like to say that it would solve the replication crisis, bring an end to shoddy news reporting, and revolutionize politics. Probably none of that would happen, and maybe this whole thing isn’t even a good idea. I’m just theorizing, I haven’t tested any of these ideas experimentally.


# How Can Donors Incentivize Good Predictions on Important but Unpopular Topics?

Altruists often would like to get good predictions on questions that don’t necessarily have great market significance. For example:

- Will a replication of a study of cash transfers show similar results?
- How much money will GiveWell move in the next five years?
- If cultured meat were price-competitive, what percent of consumers would prefer to buy it over conventional meat?

If a donor would like to give money to help make better predictions, how can they do that?

You can’t just pay people to make predictions, because there’s no incentive for their predictions to actually be accurate and well-calibrated. One step better would be to pay out only if their predictions are correct, but that still incentivizes people who may be uninformed to make predictions because there’s no downside to being wrong.

Another idea is to offer to make large bets, so that your counterparty can make a lot of money for being right, but they also want to avoid being wrong. That would incentivize people to actually do research and figure out how to make money off of betting against you. This idea, however, doesn’t necessarily give you great probability estimates because you still have to pick a probability at which to offer a bet. For example, if you offer to make a large bet at 50% odds and someone takes you up on it, then that could mean they believe the true probability is 60% or 99%, and you don’t have any great way of knowing which.

You could get around this by offering lots of bets at varying odds on the same question. That would technically work, but it’s probably a lot more expensive than necessary. A slightly cheaper method would be to determine the “true” probability estimate by binary search: offer to bet either side at 50%; if someone takes the “yes” side, offer again at 75%; if they then take the “no” side, offer at 62.5%; continue until you have reached satisfactory precision. This is still pretty expensive.
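The binary search described above can be sketched as follows. Here `takes_yes` is a stand-in for the expensive real-world step of offering a bet and seeing which side the counterparty takes:

```python
def probe_probability(takes_yes, iterations: int = 20) -> float:
    """Estimate a counterparty's credence by binary search on betting odds.
    takes_yes(q) should return True if, offered a bet at probability q,
    they take the 'yes' side (i.e., their credence exceeds q)."""
    lo, hi = 0.0, 1.0
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if takes_yes(mid):
            lo = mid   # they think the true probability is above mid
        else:
            hi = mid   # they think it's below mid
    return (lo + hi) / 2

# A simulated counterparty whose true credence is 70%:
estimate = probe_probability(lambda q: q < 0.70)
print(round(estimate, 3))  # 0.7
```

Each probe halves the uncertainty interval, so twenty offered bets pin the credence down to about one part in a million, though as the text notes, every one of those bets costs real money to offer.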

In theory, if you create a prediction market, people will be willing to bet lots of money whenever they think they can outperform the market. You might be able to start up an accurate prediction market by seeding it with your own predictions; then savvy newcomers will come and bet with you; then even savvier investors will come and bet with them; and the predictions will get more and more accurate. I’m not sure that’s how it would work out in practice. And anyway, the biggest problem with this approach is that (in the US and the UK) prediction markets are heavily restricted because they’re considered similar to gambling. I’m not well-informed about the theory or practice of prediction markets, so there might be clever ways of incentivizing good predictions that I don’t know about.

Anthony Aguirre (co-founder of Metaculus, a website for making predictions) proposed paying people based on their track record: people with a history of making good predictions get paid to make more predictions. This incentivizes people to establish and maintain a track record of making good predictions, even though they don’t get paid directly for accurate predictions per se.

Aguirre has said that Metaculus may implement this incentive structure at some point in the future. I would be interested to see how it plays out and whether it turns out to be a useful engine for generating good predictions.

One practical option, which goes back to the first idea I mentioned, is to pay a group of good forecasters like the Good Judgment Project (GJP). In theory, they don’t have a strong incentive to make good predictions, but they did win IARPA’s 2013 forecasting contest, so in practice it seems to work. I haven’t looked into how exactly to get predictions from GJP, but it might be a reasonable way of converting money into knowledge.

Based on my limited research, it looks like donors may be able to incentivize predictions reasonably effectively with a consulting service like GJP, or perhaps by doing something involving prediction markets, although I’m not sure what. I still have some big open questions:

1. What is the best way to get good predictions?
2. How much does a good prediction cost? How does the cost vary with the type of prediction? With the accuracy and precision?
3. How accurate can predictions be? What about relatively long-term predictions?
4. Assuming it’s possible to get good predictions, what are the best types of questions to ask, given the tradeoff between importance and predictability?
5. Is it possible to get good predictions from prediction markets, given the current state of regulations?

Discuss on the Effective Altruism Forum.


# Should Global Poverty Donors Give Now or Later?

Update 2020-01-04: This essay contains a number of important mistakes. See Correction on Giving Now vs. Later.

Disclaimer: I am not an investment advisor and nothing in this essay serves as investment advice.

# Introduction

Robin Hanson: If More Now, Less Later

The rate of return on investment historically has been higher than the growth rate–or, as they say, r > g. If you save your money to donate later, you can earn enough interest on it that you eventually have the funds to donate a greater amount. Because r > g, you should invest your money for as long as you can before donating1–or so the argument goes.
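Under this simple version of the argument, with log utility and perfectly correlated rates, the comparison reduces to compounding the gap between r and g. The specific rates here are illustrative assumptions, not estimates:

```python
# Illustrative rates: investments return r = 5%, recipients' incomes grow
# at g = 2%. With log utility, the discounted value of a donation made
# after t years, relative to donating now, is ((1+r)/(1+g))**t.
r, g = 0.05, 0.02

def relative_value(t: int) -> float:
    return ((1 + r) / (1 + g)) ** t

# Waiting a decade multiplies the discounted good done:
print(round(relative_value(10), 2))  # 1.34
```

The rest of the essay asks whether this naive comparison survives once the discount rate includes factors beyond income growth.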

Traditionally, we’d apply a discount rate of g to future donations, because that’s the rate at which people get richer and therefore the rate at which money becomes less valuable for them. But this ignores some important factors that affect how much we should discount future donations, and we can create a much more detailed estimate. This essay will explore that in detail. Exactly what factors determine the investment rate of return and the discount rate on poverty alleviation? Can we gain any information about which is likely greater?


# Why Do Small Donors Give Now, But Large Donors Give Later?

Some people have observed that small and large donors follow different giving patterns. Small donors who give out of their salary—that is, most people—tend to donate money more or less as soon as they earn it (usually within a year). Large donors—e.g., extremely wealthy people and foundations—tend to slowly distribute their money and hold on to most of it1. For example, large foundations typically donate little more than the legally required 5% of assets each year. Why do they behave differently?

I don’t believe this difference is surprising, and actually it’s not really even a difference.