Newcomb's Problem and Speculative Bubbles


Newcomb’s problem:

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game. In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

  • Box A is transparent and contains a thousand dollars.
  • Box B is opaque, and contains either a million dollars, or nothing.

You can take both boxes, or take only box B.

And the twist is that Omega has put a million dollars in box B if and only if Omega has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Before you make your choice, Omega has flown off and moved on to its next game. Box B is already empty or already full.

Do you take both boxes, or only box B?

The answer is that you should only take box B. All the interesting conclusions of this essay depend on that assumption, so if you think you should take two boxes, you definitely won’t agree with this essay’s conclusions (but you might still find them interesting).
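For readers who like to see the arithmetic: here’s a minimal sketch of why one-boxing has the higher expected value whenever the predictor is even slightly better than chance. The predictor accuracies below are illustrative assumptions, not part of the problem statement.

```python
BOX_A = 1_000      # transparent box: always $1,000
BOX_B = 1_000_000  # opaque box: $1M iff Omega predicted one-boxing

def ev_one_box(p):
    # Omega correctly predicts one-boxing with probability p,
    # in which case box B contains the million.
    return p * BOX_B

def ev_two_box(p):
    # You always get box A; box B is full only if Omega was wrong.
    return BOX_A + (1 - p) * BOX_B

# With a track record like Omega's (100 out of 100), p is near 1:
print(round(ev_one_box(0.99)))  # 990000
print(round(ev_two_box(0.99)))  # 11000

# One-boxing wins for any accuracy above roughly 0.5005:
assert ev_one_box(0.501) > ev_two_box(0.501)
```

The threshold comes from solving p × $1M > $1,000 + (1 − p) × $1M: Omega barely needs to beat a coin flip for one-boxing to come out ahead.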

You might not have realized it, but you can actually play a version of Omega’s game right now.

The stock market is Omega, buying the next big company is Newcomb’s problem, and you should (metaphorically) take only one box.

The stock market Newcomb’s problem

Consider a modified (and somewhat sadistic) version of Newcomb’s problem:

Omega sets a single box in front of you and then flies away. It’s opaque, and contains either a million dollars or nothing.

You can choose to either pay $1000 and take the box, or pay nothing and not take the box.

Omega will put a million dollars into the box if and only if it has predicted that you will not take it.

Do you take the box or not?

If you should take only one box in the original Newcomb’s problem, then in this modified version you should not take the box for the same reason. If you take it, you pay $1000 and Omega successfully predicts that you’ll take it, so you get nothing and you’re out $1000.

Buying the next hottest stock works pretty much the same way.

If you don’t take the box (a.k.a. the latest hot stock), it may contain tons of money that you didn’t get. If you do take it, you get nothing. In other words, hot speculative assets will only make lots of money if you don’t buy them.

I just made a pretty outlandish claim. It actually follows straightforwardly from the efficient market hypothesis (EMH), but it will take some work to fully explain.

According to EMH, the current price of an asset already reflects all available information about that asset. A lot of people know everything you know about markets plus more. If you have some knowledge that leads you to believe that an asset’s price is going to go up, other investors already know the same things you do and they already drove the price up.

The market has really strong forces pushing toward making it efficient. The finance industry employs tens (hundreds?) of thousands of people trying really hard to find mispriced assets, who then buy or sell those assets until they’re not mispriced anymore. Anything you learn about the market is going to be old news for them.1

You can only buy at the height of the bubble

Say you are considering buying the latest big company, Bubbles Inc.

They probably manufacture bubble blower toys or something.

Suppose the company will have a big price run-up. If you can buy in early, you get to participate in the run-up and potentially make a lot of money. If you buy near the peak, you’ll just lose most of your investment.

Pretty much by definition, most people invest near the peak–prices rise when there’s more demand, which means more people trying to buy. If you randomly pick a person who’s buying stock in Bubbles Inc., chances are good they’re buying near the top. And thanks to our friend the efficient market hypothesis, you cannot reliably buy into the stock early on, before it has a big run-up.
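To make the “most people buy near the peak” point concrete, here’s a toy model. The assumption that the number of new buyers doubles each period is mine, purely for illustration; the point is that under any accelerating demand, most buyers (by count) enter late in the run-up.

```python
# Toy bubble: suppose the number of new buyers doubles every period until
# the peak (the doubling rate is an assumption, not data).
periods = 10
new_buyers = [2 ** t for t in range(periods)]  # 1, 2, 4, ..., 512
total = sum(new_buyers)                        # 1023

# Fraction of all buyers who entered in the last two periods before the peak:
late_share = sum(new_buyers[-2:]) / total
print(f"{late_share:.0%}")  # 75%
```

Three quarters of everyone who ever bought did so in the final fifth of the run-up, which is exactly why the randomly chosen buyer is probably buying near the top.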

Suppose you consider buying into Bubbles Inc. but decide not to, and then the price increases 10x. You might say, oh, I should have bought some, I would have made so much money! But in fact, you could not have bought Bubbles Inc. and still made money.

Counterfactual stock purchases and Newcomb’s problem

When you play the modified Newcomb’s problem described above, you will choose not to take the box and then, after not taking it, see that it contains $1 million. You might be tempted to say, “If only I had taken the box, I would have gotten a million dollars!” That would be true under normal circumstances. But in this game, you only get the money if Omega predicts that you don’t take the box. If you had taken the box, Omega would have predicted you would take the box, and you wouldn’t have gotten the money. There does not exist any counterfactual situation where you take the box and get the million dollars.

Bubbles Inc. works the same way. You cannot reliably predict that Bubbles Inc. will have a big run-up (although you might occasionally get it right by sheer chance2). Suppose you were considering buying Bubbles Inc.–you had some reason to expect the price to increase a lot–but then you decided not to, and afterward, the stock made huge returns. If you had acted on that reason, the market would have known you were going to do that (in other words, all the people with more information than you would have acted on the same reason) and the price would have gone up before you ever bought any Bubbles Inc. stock. When you see Omega’s box with $1 million, you should not regret that you didn’t take it; for the same reason, when you see a stock’s price skyrocket, you should not regret that you didn’t buy it.

Mental algorithms

We can understand why the market behaves this way by thinking in terms of mental algorithms. When you do or don’t buy a stock, you run through some cognitive process to make that decision. Thanks to the efficient market hypothesis, you have to assume that lots of other people are running the same or better processes. That means if you decide not to buy a stock, other people’s algorithms might agree with yours and they won’t buy either. But if your mental algorithm does decide to buy, other people will decide the same thing (including really smart people with lots of money, like hedge funds), and they will make their decisions faster than you do. You can’t say “oh, if only I had bought Bubbles Inc. a year ago, I would have made tons of money!”: if your mental algorithm had said to buy it, so would lots of other people’s, and they would have done it before you.

Can you regret making the wrong choice if you chose randomly?

I can think of one situation where you perhaps could have bought Bubbles Inc. before most people. If you chose whether to buy purely at random, then everyone else could also have randomly chosen not to buy, and you could end up making a lot of money. If you randomly chose not to buy, perhaps you can genuinely say, “I regret not buying Bubbles Inc., because I know that even if I had, other people still wouldn’t have.”

Perhaps in this situation, you genuinely could have chosen differently and taken advantage of Bubbles Inc.’s subsequent rise without the market preemptively taking away your opportunity. But it only works because you’re running a bad algorithm. You definitely shouldn’t buy stocks randomly. On average you will certainly do no better than the market, and from a risk-adjusted perspective you’ll do worse, because by choosing stocks at random you’re losing diversification without gaining any expected return. So the only way you can perhaps regret having not bought Bubbles Inc. is if you’re running a bad algorithm.
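A quick simulation of that last claim, with made-up return parameters: a single randomly chosen stock has about the same expected return as holding the whole market, but far more volatility, so its risk-adjusted return is worse.

```python
import random
import statistics

random.seed(0)
N_STOCKS, TRIALS = 100, 10_000

def one_year_returns():
    # Every stock: 8% expected return, 30% standard deviation (made up).
    return [random.gauss(0.08, 0.30) for _ in range(N_STOCKS)]

market, single = [], []
for _ in range(TRIALS):
    returns = one_year_returns()
    market.append(sum(returns) / N_STOCKS)  # equal-weight "whole market"
    single.append(random.choice(returns))   # one randomly picked stock

# Similar average returns, but the random picker takes on far more risk:
print(round(statistics.mean(market), 2), round(statistics.stdev(market), 2))
print(round(statistics.mean(single), 2), round(statistics.stdev(single), 2))
```

The means come out nearly identical while the single stock’s standard deviation is roughly ten times the market’s: diversification given up, no expected return gained.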

Real life

Fairly regularly, I hear friends talking about how they considered buying bitcoin five years ago but didn’t. They regret not buying it because they believe they could’ve made lots of money if they had. (As of this writing, bitcoin has returned about 500x over the past five years.)

Except if they had bought bitcoin five years ago, it wouldn’t have gotten a 500x return. If they had had some good reason to buy bitcoin, lots of other people would have had the same reason, and the price would have already risen before they decided to buy any.

That’s not to say nobody took advantage of that 500x return–clearly some people did, because some people held bitcoin in 2012. But very few people did, and nobody could have done so for any reason other than sheer luck3. In 2012, there was no available information reliably demonstrating that the price of bitcoin would increase by a lot–if there were, the smart money would have already driven the price up. If you had had any information that would lead you to buy bitcoin, hedge funds would have had it, too. Much like Omega, the market knows what you’re going to do, and it will stop you from getting any special advantage.


  1. I expect that for some readers, EMH will be a significant point of disagreement. It would take more time than we have here to give a full justification of why I believe that markets are efficient (with some exceptions). For further reading, see: Efficient Capital Markets by Eugene Fama; A Random Walk Down Wall Street by Burton Malkiel; and Common Sense on Mutual Funds by John Bogle.

  2. Making lots of money by chance doesn’t conflict with EMH. EMH says that no strategy can have a higher expected risk-adjusted return than the broad market. You might still do better by sheer luck–but you could just as easily do worse.

  3. Maybe trading firms could have, but people like you and I couldn’t. Perhaps some firms have sophisticated models of human behavior where they can predict better than chance the probability that an asset (such as bitcoin, or beanie babies, or black lotus in 1993) will become far more popular in the future. I have seen many ordinary people claim that they can do this but I have yet to see any compelling evidence.

A Serious Problem with Quantitative Estimates

Any probability distribution over an action’s utility must either (1) have an infinite mean, or (2) have such a thin tail that you cannot ever reasonably expect to see extreme values. When we’re estimating the utility distribution of an intervention, both of these options are bad.

Note: For this essay I will assume that actions cannot produce negative utility. If you start letting actions have negative value, the reasoning gets way more complicated. Maybe that’s an essay for another time.

Infinite ethics make everything bad

The possibility of infinitely good actions poses a substantial challenge to ethics. In short, if an action has nonzero probability of producing infinite value, then it has infinite expected value. Once actions have infinite expected value, we can’t rank them or even reason about them in any meaningful way. (Nick Bostrom discusses this and other problems in his paper, Infinite Ethics.)

There exists a very real possibility that our universe is infinitely large, which would pose serious problems for ethical decisions. And, less likely but still not impossible, some actions we take could have some probability of producing infinite utility. I can get myself to sleep at night by hand-waving this away, saying that I’ll just assume the universe is finite and all our actions have zero probability of creating infinite value. I don’t like this “solution”, but at least it means I can ignore the problem without feeling too bad about it.

But there’s another problem that I don’t think I can ignore.

Expected value distributions make everything much worse

You’re considering taking some action (such as donating to charity). You have (explicitly or implicitly) some probability distribution over the utility of that action. Either your distribution has infinite expected value, or it has finite expected value.

If your utility distribution has infinite expected value, that’s bad. Infinite expected values ruin everything.

If your utility distribution has finite expected value, then the probability (according to your distribution) of the action producing at least X utility must, for large X, shrink faster than 1/X. If the action followed a Pareto distribution with parameter alpha = 1, the probability of producing at least X utility would be exactly proportional to 1/X, and such a distribution has infinite expected value. So any finite-expected-value distribution must have a thinner tail than a Pareto distribution with alpha = 1.

(If you’re unfamiliar, you don’t need to worry about what it means for a Pareto distribution to have alpha=1. Suffice it to say that Pareto distributions may have fatter or thinner tails, and the value of alpha determines the thinness of the tail (larger alpha means thinner tail). alpha=1 is the point at which the expected value of the distribution becomes infinite.)
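If you want to see the alpha = 1 pathology for yourself, here’s a small simulation (the inverse-CDF sampler and sample sizes are just illustrative): with alpha = 1, sample averages never settle down, while with alpha = 2 they converge to the finite mean of 2.

```python
import random

random.seed(1)

def pareto_sample(alpha, n, x_min=1.0):
    # Inverse-CDF sampling: X = x_min / U^(1/alpha) with U uniform in (0, 1].
    return [x_min / (1.0 - random.random()) ** (1.0 / alpha) for _ in range(n)]

results = {}
for alpha in (1.0, 2.0):
    # Three independent sample averages of 100,000 draws each:
    results[alpha] = [sum(pareto_sample(alpha, 100_000)) / 100_000
                      for _ in range(3)]
    print(alpha, [round(m, 1) for m in results[alpha]])
# alpha = 1: the averages wander (and occasionally explode) -- infinite mean.
# alpha = 2: the averages hover around the finite mean of 2.
```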

When you believe that an action’s utility distribution has thinner tails than an alpha=1 Pareto distribution, you run into the Charity Doomsday Argument. If you discover some evidence that the action has much higher expected value than you thought, you are pretty much compelled by your prior distribution to stubbornly disbelieve the evidence:

To really flesh out the strangely strong conclusion of these priors, suppose that we lived to see spacecraft intensively colonize the galaxy. There would be a detailed history leading up to this outcome, technical blueprints and experiments supporting the existence of the relevant technologies, radio communication and travelers from other star systems, etc. This would be a lot of evidence by normal standards, but the [finite expected value] priors would never let us believe our own eyes: someone who really held a prior like that would conclude they had gone insane or that some conspiracy was faking the evidence.

Yet if I lived through an era of space colonization, I think I could be convinced that it was real. […] So a prior which says that space colonization is essentially impossible does not accurately characterize our beliefs.

If you put a prior of 10^-50 on having an expected value of 10^48 (the exact numbers don’t matter too much), you require extraordinary, un-meetable levels of evidence to ever convince yourself that you could actually have that much impact.
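For concreteness, here’s the arithmetic on how much evidence a prior like that demands (using the illustrative 10^-50 figure from above):

```python
import math

prior = 1e-50
prior_odds = prior / (1 - prior)

# To reach even odds, the Bayes factor of your evidence must cancel out the
# prior odds: that's a likelihood ratio of about 10^50.
bayes_factor_needed = 1 / prior_odds
bits_of_evidence = math.log2(bayes_factor_needed)
print(round(bits_of_evidence, 1))  # 166.1
```

About 166 bits of independent, fully reliable evidence just to get to fifty-fifty. That’s the sense in which the prior “never lets us believe our own eyes.”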

We must either accept that an action has infinite expected value; or we must give it a “stubborn” distribution that massively discounts any evidence that would substantially change our beliefs.

This actually matters

You could say that this concern is merely academic. In practice we don’t formally produce utility distributions for all our actions and then calculate their expected values. And while it’s true that we don’t often use utility distributions explicitly, we still use them implicitly. We could take our beliefs and produce a corresponding utility distribution. And this distribution must either have a thin tail, in which case it’s too stubborn, or it must have a fat tail, in which case its expected utility is infinity.

What to do?

Let’s look at a few possible resolutions. Everything beyond this point is highly speculative and probably useless.

Resolution 1: Accept the infinite distribution

In some cases, people generally behave consistently with assuming a fat-tailed distribution1, in that they readily update on evidence, even when it points to a dramatically different result than what they previously believed. For example, I don’t know anyone who refuses to believe that the known universe contains 10^24 stars.

That doesn’t mean we should actually use distributions with infinite expected value, although it hints that maybe we can find a way to use them. Even if we don’t ever use infinite distributions, infinite ethics still poses problems that we need to solve. If we can solve those problems then perhaps we can freely use fat-tailed distributions.

How well can we actually solve them? So far we don’t have any satisfying solutions. Adam Jonsson recently wrote something in the direction of a solution (see also this informal discussion of the result) but we still have a lot of unresolved concerns. But maybe if we can figure out these issues surrounding infinite ethics then we can accept that actions have infinite expected value2.

Resolution 2: Accept the finite distribution

The Charity Doomsday Argument is merely counterintuitive, not ethics-destroyingly decisionmaking-ruiningly terrible. So perhaps it’s the better choice.

In some cases, people behave more consistently with assuming a thin-tailed distribution. For instance, in Pascal’s mugging, people refuse to believe the evidence presented by the mugger. Some people disbelieve the astronomical waste argument for similar reasons, although others disagree.

Resolution 3: Reject utilitarianism

The problems with utility distributions don’t matter if we don’t care about utility, right? Problem solved! Except that non-utilitarianism has tons of other problems.

Resolution 4: Follow your common sense

a.k.a. the Epica solution

In practice, we often have no difficulty saying we prefer one action to another. Perhaps we could argue that we’re more confident about this than we are about some abstract claims about probability distributions. If our math gives bad results, then maybe something’s wrong with the math–maybe it doesn’t really apply to the real world.

But lack of rigor makes me uncomfortable. If we have a good reasoning process, we can probably formalize it somehow. The problems raised by this essay demonstrate that I have no idea how to formalize my reasoning. I don’t want to just follow “common sense” because I have so many biases that I can’t properly trust my instincts. So I don’t feel great about resolution #4, even though it’s the only one that appears to work at all.

Once again we have failed to solve any problems but only descended deeper into a hopeless spiral of indecision.


Thanks to Jake McKinnon, Buck Schlegeris, and Ben Weinstein-Raun for reading drafts of this essay.

  1. “Fat-tailed” is a relative term. Usually any Pareto distribution is considered fat-tailed, even if it has finite expected value. Here I use “fat-tailed” to refer only to distributions with infinite expected value.

  2. Finite distributions with infinite expected values might actually be easier to deal with than some other problems in infinite ethics. I haven’t investigated this, but one thing that comes to mind is we could say a distribution A is better than distribution B if, for all n, the integral from 0 to n of A is greater than the integral from 0 to n of B. (That obviously doesn’t solve everything but it’s a start.)

New Comment System

I have removed Disqus and replaced it with built-in static comments. Disqus comments are disabled, but still visible on any old posts1. New posts going forward will only use the new static comment system.

I had been wanting to switch off Disqus for a while. It has a few disadvantages:

  1. I have no control over comments except for the moderation tools Disqus provides.
  2. I have no control over how comments are displayed.
  3. I don’t know what Disqus is doing or might do with commenters’ personal information.

The new comment system does exactly what I want it to do and nothing more.

Edited to add: If anyone’s interested, I’m using the Jekyll Static Comments plugin by Matt Palmer, with a few personal modifications.


  1. A lot of old posts don’t have any comments as of this writing, so I removed Disqus from those posts. I left the Disqus comment section only on posts that actually had comments.

An Idea for How Large Donors Can Support Small Charities

Epistemic status: I just thought of this right before writing this essay and I haven’t talked to anyone about it or really thought it through.

“Large donor” is code for The Open Philanthropy Project, because it’s the only large donor where anyone working there has a >1% chance of reading this.

Large donors generally don’t want to provide too much funding to small organizations. Providing too much funding can lead to various problems1:

  • It can cause the organization to care too much about the opinions of a single large donor.
  • It can make the donor look excessively responsible for the organization’s behavior.
  • It harms the organization if the large donor ever withdraws funding.

Open Phil has attempted to avoid these problems by providing fixed-size grants that do not represent more than about a third (usually) of the recipient’s total budget.

Suppose you’re a large donor and you want to support a relatively small charity. You don’t want to be responsible for more than about a third of its budget, and its current annual budget is $1 million. Then you could make a $1.5 million grant to be paid out in $500,000 chunks over the next three years–that would be a pretty typical strategy.

I have an idea for another way to do this that I believe provides better incentives to the charity and to other donors. Instead of making a fixed-size grant, commit to providing funding over the next three years, but make the grant sizes dependent on the organization’s budget. So instead of granting $500K per year, you grant an amount equal to 1/2 of the incoming revenue over the prior year.

For example: if a charity raises $1 million in 2016, you make a $500K grant at the beginning of 2017. Then, if the charity raises $1.2 million in 2017, you donate $600K at the beginning of 2018.
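The scheme fits in a few lines of Python (assuming, for simplicity, that revenue excluding the grant is roughly stable from year to year):

```python
def yearly_grant(prior_year_revenue):
    # Grant = half of the charity's non-grant revenue from the prior year.
    return 0.5 * prior_year_revenue

for revenue in (1_000_000, 1_200_000):
    grant = yearly_grant(revenue)
    total_budget = revenue + grant
    print(int(grant), round(grant / total_budget, 3))  # donor share stays 1/3
```

A nice property of the one-half matching rate: since the grant equals R/2 on non-grant revenue R, the donor’s share of the total budget is (R/2) / (R + R/2) = 1/3 in every year, no matter how the charity’s fundraising grows or shrinks.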


Do Investors Put Too Much Stock in the U.S.?

Summary: Investment advisors typically recommend that you put somewhere between 50% and 75% of your stock investments into US stocks and the rest into international markets. Most individual investors have 70% or more of their stock money in the US or their home country, a phenomenon that’s aptly called home country bias. But there are reasons to believe that even 50% is too much, and most people should really have more like 0-30% of their stock investments in the United States1,2.

Disclaimer: I am not an investment advisor and this should not be taken as investment advice. Please do your own research or seek professional advice and otherwise take reasonable precautions before making any significant investment decisions.

The global market portfolio

Let’s start from the efficient market hypothesis3. That means the prices of financial assets fully reflect all available information about them. In an efficient market, there’s only one “free lunch”: diversification. If you don’t own a globally diversified portfolio, you’re taking on unsystematic risk, and you could reduce your risk by diversifying.

If we take this to the limit, how do we get the most diverse possible portfolio? Simple: buy some of every asset in the world. In particular, buy each asset in proportion to its total market value. That would give your portfolio something like 20% US stocks, 20% international stocks, 50% bonds, and 10% real assets such as gold and real estate.

If we look at just the stock portion of your portfolio, you’d have 50% in US stocks and 50% in foreign stocks (as of early 2017–these percentages will likely change over time). So if we simply follow the efficient market hypothesis, we should aim for about a 50/50 split.
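The computation is just normalization by total market value. The dollar figures below are assumptions, chosen only to match the rough percentages above:

```python
market_values = {  # trillions of dollars -- assumed, to match the text above
    "US stocks": 20,
    "international stocks": 20,
    "bonds": 50,
    "real assets": 10,
}
total = sum(market_values.values())
weights = {name: value / total for name, value in market_values.items()}
print(weights)  # each asset held in proportion to its market value

# Looking at just the stock portion of the portfolio:
stocks = market_values["US stocks"] + market_values["international stocks"]
print(market_values["US stocks"] / stocks)  # 0.5
```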

Not everyone buys this. Here’s a quote from the popular personal finance blogger Mr. Money Mustache:

What about International stocks? Some people like to get fancy and buy international index funds, which can do well when the US is hurting (as it has been recently). This is fine, as long as you understand that it’s just another form of trying to outsmart the basic stock index. When you do this, you are stating that you believe the stock markets of the other countries are more undervalued relative to future growth, than the US market is.

But that’s exactly the opposite of how it should work! You don’t start with 100% investment in US stocks and then require justification for diversifying globally. If you accept the efficient market hypothesis, you should start with the global market portfolio and demand a justification for deviating from that–and a disproportionate allocation to US stocks qualifies as a deviation. You should buy the whole world instead of privileging the United States over other countries.

Why you should overweight international stocks

If you buy the global market portfolio, you will have about half your stocks in the US and half in other countries. But we have several convincing reasons for decreasing our US allocation and buying more stocks in foreign countries.


Ideas Too Short for Essays

Ever since I started writing essays, I have accumulated many more essay ideas than I actually end up writing. I frequently have an idea, write a paragraph or two, and then realize I have nothing left to say. Rather than leaving these unpublished, I am trying an experiment: this post contains a compilation of some of these ideas that were too short for essays.

In this issue, we discuss:

  • putting your money where your mouth is
  • the linguistics of swear words
  • cause prioritization
  • religion
  • more cause prioritization, because that’s obviously been my favorite topic recently

People should make bets on charities

It’s really important that we make good decisions about which charities to support and how to direct our altruistic endeavours. Unlike in the for-profit sector, we have no direct incentives to make optimal donation decisions1, so we have to hope that we come up with the right thing to do just by trying really hard.

Sometimes we can improve our incentives by putting our money where our mouths are. We obviously can’t make bets about which charity will do the most good, because we have no direct way to measure that, but we can make related predictions.

People should make bets because:

  1. If you make bets on outcomes that relate to charities’ effectiveness, you have direct personal incentives to get the answers correct. I bet with Nick Beckstead on clean meat technology because I believe he’s overly pessimistic, and I want him to have a personal stake in the matter since he has the power to provide the field with millions of dollars of funding. At the same time, I’m funding an organization that supports clean meat (among other things), so I should have a personal stake in the matter as well.
  2. Winning a bet provides (weak) evidence that you have a stronger ability to make predictions. If donors make bets with each other, over time the best predictors will make the most money. Better predictors probably make better donation decisions on average (although other factors, like personal values, matter a lot too), which means money gets donated more effectively overall.

On “Bullshit”

As far as I can tell, “bullshit” is the only swear word with a unique meaning.

Take the word “fuck”. There are certainly situations where you’d rather use the word “fuck” than any other word because it’s more emphatic; but you could always replace it with another word without much changing the meaning of the statement. When you’re using it as a verb you can usually replace it with “screw”. If you use “fucking” for emphasis, you can replace it with “freaking” or “damn” or “bloody”. A lot of times “fuck” merely serves as a stronger version of an alternative phrase, e.g., “What the hell?” -> “What the fuck?” or “I don’t give a damn” -> “I don’t give a fuck”.

But “bullshit” refers to a certain specific type of lying and there’s no other word that means the same thing. In Harry Frankfurt’s book “On Bullshit”, he initially talks about “bullshit” and then says in the interest of professionalism he will refer to it as “humbug” instead. But if he had a book called “On Humbug”, nobody would know what he was talking about, because humbug is a weird word that nobody uses (except for Scrooge, I guess). Heck, he wrote a professional work of philosophy titled “On Bullshit” because that’s the best way to describe what the book is about2. In contrast, there’s no reason why a philosophy book ever needs to say “fuck” in the title (unless it’s about the word “fuck” itself).

Some notes I took at Effective Altruism Global 2016

These are comments various other people made that I found insightful.

  • People can become influential by being the first to make a novel attempt at something, even if the attempt is rough.
  • Many effective organizations don’t need funding right now because they’re scaling up, but they will need more funding in 2-3 years.
  • The best way to debias is to practice.
  • We need to improve the quality of questions but we haven’t figured out how.
  • At large events, we should assign a designated critic who is responsible for criticizing everyone else’s ideas.
  • It’s likely easier to shift the probability of outcomes with near 0.5 probability than outcomes with probability close to 0 or 1.

On religious agnosticism

Some people say that no one should call themselves atheists–they should call themselves agnostics, because they can’t know for sure that there is no god.

When I say “There is no god”, I don’t know that there’s no god, in the sense that I’m not 100% certain that I’m correct. But I shouldn’t feel the need to tell anyone that, because of course I’m not 100% certain. When I say “I had cereal for breakfast this morning”, I’m not totally certain that I did: maybe I’m misremembering, or maybe what I thought was cereal was actually really weird tiny waffles. There is no need to qualify my statement as “Perhaps I had cereal for breakfast this morning, but no one can ever truly know such a thing.” For the same reasons, I can reasonably claim without qualification that there is no god (and especially that the gods of the major religions do not exist).

You should give either all or none of your money to mainstream politics

Altruists should maximize expected value. That means we shouldn’t diversify our donations for the sake of reducing risk; we should only diversify if we can extract more expected value by donating to multiple causes, perhaps because we fund one cause until it has enough funding and no longer looks like the best place to donate, and then switch to a different cause. (I discuss similar ideas in a previous essay.)

If you have lots of money, you might want to give to multiple causes because you fill the most important funding gaps in one cause and then move on to the next one. But if you can’t fill the funding gap for the best cause, you should give all your money to it.

Some people give money to mainstream politics because they believe it’s the most effective cause. Some people give money to mainstream politics and also to global poverty charities like GiveDirectly with lots of ability to absorb funding. But mainstream politics3 also has lots of ability to absorb funding. If you prefer donating to political campaigns over GiveDirectly, it’s really unlikely that you could donate enough money to politics to make GiveDirectly look better on margin, and vice versa. Therefore, if you want to do the most good with your donations, you should probably only donate to one or the other, not both4.


  1. Or decisions about which charities to work for, or to volunteer for, or to do whatever else we do that benefits them.

  2. On the other hand, it probably wouldn’t be as well-known if he had called it “On A Particular Specific Type of Lying” or whatever. On the other other hand, he’s not particularly well-known among lay audiences anyway.

  3. I distinguish mainstream politics from niche politics because niche politics may have limited room for more funding, so the same reasoning doesn’t always apply.

  4. The United States has individual campaign contribution limits, but you can get around these by donating to many campaigns or by donating to PACs.


Good Ventures/Open Phil Should Make Riskier Grants

The Open Philanthropy Project (Open Phil) aims to follow what it calls hits-based giving, which means it makes risky bets and many of its grants may end up failing. I agree with this idea and I believe that donors should generally be less risk averse. Good Ventures, the foundation that financially backs Open Phil, behaves more conservatively than a “hits-based” approach would predict, and it probably ought to take greater risks in the interest of doing more good.

What do risk-averse and risk-neutral approaches look like?

Let’s begin by talking about investing. Suppose you’re an investor with a billion dollars. You’re risk-averse: if you could take a bet with a 75% chance of doubling your money and a 25% chance of losing it all, you wouldn’t take it, even though this bet has a high expected value–you value your first billion dollars a lot more than you value your second billion. So how do you invest?

  • Put some of your money into bonds, which provide a safe and stable (but small) return on investment.
  • Put most of your money into riskier ventures like stocks, real estate, and private equity, but buy many different assets to ensure good diversification.

In contrast, a risk-neutral investor doesn’t care about risk–they are indifferent between having a billion dollars and a 50% shot at two billion (with a 50% chance of nothing at all). Not everyone agrees on what risk-neutral investors should do–partly because essentially no real investors are risk-neutral1–but it involves atypical behaviors like buying stocks with leverage.
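As an illustration of the difference, here is a small sketch of the bet above under two decision rules. The risk-neutral rule maximizes expected dollars. For the risk-averse rule I assume log utility–a common modeling choice, not something the essay specifies–under which a 25% chance of total ruin is unacceptable no matter the upside.

```python
import math

# Bet from the text: 75% chance of doubling $1B, 25% chance of losing it all.
p_win, p_lose = 0.75, 0.25
wealth = 1_000_000_000

# Risk-neutral rule: compare expected dollar values. The bet wins.
ev_take = p_win * (2 * wealth) + p_lose * 0
ev_pass = wealth
print(ev_take > ev_pass)  # True: expected value favors taking the bet

# Risk-averse rule (assumed log utility): utility of zero wealth is
# negative infinity, so any chance of ruin dominates. The bet loses.
eu_take = p_win * math.log(2 * wealth) + p_lose * float("-inf")
eu_pass = math.log(wealth)
print(eu_take < eu_pass)  # True: a log-utility investor declines
```

The same structure carries over to the donor comparison that follows: a risk-averse donor values a guaranteed positive impact more than a gamble with higher expected impact.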

Similarly, suppose you’re a risk-averse donor with a billion dollars: all else equal, you prefer to do more good, but you also want to make sure that you have a guaranteed positive impact. What do you do with your money?

  • Give some of your money to safe bets that have a high probability of doing good–things like GiveDirectly and the Against Malaria Foundation.
  • Give most of your money to many riskier ventures with high expected value. Many of them will fail, but you can support lots of projects, so you have a high chance of doing a lot of good–more than you would just by giving all your money to GiveDirectly.

Where I Am Donating in 2016

Part of a series for My Cause Selection 2016. For background, see my writings on cause selection for 2015 and my series on quantitative models.


In my previous essay, I explained why I am prioritizing animal advocacy as a cause area. In this essay, I decide where to donate. I share some general considerations, briefly discuss some promising organizations I did not prioritize, and then list my top candidates for donation and explain why I considered them. I conclude with a final decision about where to donate.

This year, I plan on donating $20,000 to the Good Food Institute (GFI), which primarily works to foster the development of animal product alternatives through supporting innovation and promoting research. I believe it has an extraordinarily large expected effect on reducing animal consumption and contributing to improving societal values.

My writeup last year persuaded people to donate a total of about $40,000 to my favorite charities; if I move a similar amount this year, I believe GFI will still have substantial room for more funding even after that.

I will donate a few weeks after publishing this, so you have some time to persuade me if you believe I should make a different decision. Another donor plans to contribute an additional $60,000 AUD (~$45,000 USD) to GFI and is also open to persuasion.

This essay builds on last year’s document. Unless I say otherwise here or in one of my previous writings, I still endorse the claims I made last year. Last year, I discussed my fundamental values and my beliefs about broad-level causes plus a handful of organizations, so I will not retread that ground.


Dedicated Donors May Not Want to Sign the Giving What We Can Pledge

The Giving What We Can pledge serves as a useful way to commit to donating 10% (or more) of your income, and probably also helps show by example that donating this much money is a reasonable and achievable thing to do. I believe it serves as a useful way to commit yourself to donating if you suspect that your commitment might waver. But there are some considerations against signing the pledge, and these considerations look particularly persuasive if you already have a strong commitment to helping the world.

Some of this might be obvious, but I think it’s worth discussing—people often talk about why you should take the pledge but rarely about when you shouldn’t, and the pledge isn’t the right choice for everyone. The counterpoints I raise here don’t cover everything; I’m mostly drawing on my own experiences, and I’m sure other people have had experiences that I haven’t.

Losing flexibility

The more you donate, the less money you have to spend on other things, and the tighter your budget becomes. Maybe you’re earning more money than you need, in which case you can donate all your spare income with no trouble.

But it’s important to remember how your money needs will change over time. Maybe you will have no problem keeping up your donations for the next few years, but things could change. You might decide to have children, which will dramatically increase your expenditures (although some people with kids still donate a lot). You might start a startup or a non-profit, or take a job at a non-profit where you won’t be making much. Many people consider pledging in college, when it’s hard to anticipate your expenses as a young adult, and anyone at any age can have unexpected medical expenses or life-changing circumstances. Before you commit to donate some amount of money, make sure you will still be able to afford it in the future.

Most people’s expenditures increase throughout their lives, but their income increases as well, so they shouldn’t have a problem keeping up the same rate of donation. That said, do consider whether you expect your income to increase as much as or more than your spending.

People might be reluctant to take a job doing direct work if that would compromise their ability to fulfill their pledge. Since there are a lot of opportunities to do good in direct work that may be more valuable than donating 10%, we wouldn’t want to discourage the former in pursuit of the latter.

Overjustification effect

I recently caught myself following this chain of reasoning:

  1. I would like to donate a sizable chunk of my income in 2017 because donating money helps the world.
  2. I pledged to donate 20% of my income, so I need to donate at least $25,000.
  3. Therefore I will donate $25,000, because I pledged that I would.
  4. If I donate $25,000, max out my 401(k), and exercise my stock options, I will have negative cash flow for 2017. That is bad.
  5. Of these three big expenditures, the pledge is the least important, since keeping my word on something like this doesn’t matter as much as being able to retire comfortably. So maybe I should donate less.1

This reasoning obviously doesn’t make sense—I initially wanted to donate because I thought it would help the world, not just because it’s what I said I would do. But after I promised to donate 20% of my income, I forgot my original motivation and only thought about the pledge.

This is a version of the overjustification effect: if you get an extrinsic incentive to do something, it reduces your intrinsic motivation. I saw this happen to myself when taking the pledge reduced my intrinsic motivation to donate. Fortunately, I figured out what happened and reminded myself that donating money has inherent value and it’s not only about keeping a promise.

So although giving yourself an external incentive is meant to increase your commitment to doing a good thing, it can paradoxically decrease your commitment by reducing your intrinsic motivation. This could even reduce how much you donate—perhaps you would have donated 15%, but since you pledged to donate 10%, now you only donate the amount you committed to.


I believe that people usually should take the Giving What We Can pledge. Most people in the effective altruism community donate less than 10% of their income—if more people took the pledge, we would see more donations, which would help the world. If you suspect that your future commitment to doing good may waver, you could take the pledge as a way to keep yourself on track. But before you do take it, consider some relevant factors:

  1. Are you going to need a lot of money at some point in the future, such that it will become harder for you to keep donating as much?
  2. Might you want to focus a lot of time on doing good in some way that prevents you from making much money, such as starting a non-profit; and if so, would you be able to continue donating the same percentage of your income?

I expect most people should be able to donate 10% of their income (although I’m not in a great position to judge since I have been lucky enough never to have to live on a low salary). I pledged to donate 20% of my income, and while I expect that I will always be able to donate that much, it does substantially limit me in some ways—most obviously, it makes it harder for me to switch to a job that pays a low salary. I regret pledging to donate this much, and perhaps I should not have pledged at all. I do place high value on keeping my word, so the Giving What We Can pledge could potentially help keep me committed; but I was already committed to donating and probably would have donated as much if I had not signed the pledge, so signing it only imposed limits on me without providing any benefits.

Some readers may be in a similar position to where I was before I signed the Giving What We Can pledge. If so, you should consider what benefits the pledge provides you and how it might hurt you, and decide if it makes sense given your personal circumstances.


  1. For the record, I am not going to break my pledge in 2017 and I have no intention of ever doing so.


Is Preventing Global Poverty Good?

This essay could perhaps be called “Is improving people’s economic station good?”, because all the arguments presented here apply just as well to people in the developed world as to people in the developing world. But my readers generally don’t spend much effort trying to make people in developed countries wealthier, while many of them do a lot to make the globally poor better off.

When people calculate the effects of efforts to alleviate global poverty, they tend to report figures in terms like QALYs/dollar. I believe this is misleading; it gives a reasonably precise measure of the direct first-order effects while ignoring the other, more complicated consequences of interventions. These other effects include some negative and some positive ones, but they tend to significantly outweigh the first-order effects. Instead of evaluating global poverty interventions by the magnitude of their first-order effects, we should look at their indirect effects as well.

Let’s divide the effects into three categories: direct, medium-term, and long-term. We have the most certainty about direct effects, but long-term effects matter most.

Direct effects

Donating to global poverty1 has one important direct effect: the people you’re donating to are better off. In the case of GiveDirectly, this happens because they have more money; for deworming charities, people are better off because they don’t have worms anymore, which likely substantially improves their quality of life2. We have better evidence about these direct effects than about anything else, and when people talk about the benefits of GiveWell top charities, they almost exclusively talk about these effects.

Medium-term effects

Global poverty interventions have some indirect effects that are still easier to quantify and reason about than their long-term effects on economic or technological progress (which I discuss in “Long-term effects” below).

  1. Making people wealthier causes them to eat more factory-farmed animals.
  2. Making people wealthier causes them to consume more environmental resources, reducing wild animal populations.
  3. Making people wealthier worsens climate change, increasing existential risk.
  4. Global poverty interventions (probably?) decrease fertility.

Kyle Bogosian’s calculations suggest that increasing someone’s income in the developing world by $1000 causes about 15 days of land animal suffering, which probably means that a donation to GiveDirectly does more harm via creating factory farming than it does good via making a human financially better off. His calculations rely on uncertain estimates and I do not consider them robust, although they do draw on some hard data, and it would probably be possible to substantially reduce the error bars on this estimate.

Additionally, making people wealthier causes them to consume more environmental resources. This has lots of effects, but the most direct one is that it reduces wild animal populations. If wild animal lives are net negative on balance, we should want to reduce the number of wild animals. I have not seen any adequate estimates of the effect size here so I cannot say how much it matters, but I suspect that it outweighs both the direct effect of making humans wealthier and the effect of increasing factory farming.

At the same time, if people consume more environmental resources, that contributes to climate change. This has an unclear effect on wild animal suffering but certainly increases the risk of extinction from severe runaway global warming.

Global poverty interventions probably decrease fertility. Unlike the other topics discussed in this section, lots of research has actually been done on this question, although I haven’t read it so I can’t make any confident claims about it. But what I know suggests that making people healthier and wealthier probably causes them to have fewer kids, which reduces population size after a generation. This could be good or bad for all of the many reasons that people existing could be good or bad. Most directly, it means that fewer people get to live decent lives, which is bad. But it also means less suffering on factory farms. Fewer people possibly means less conversion of wild habitat and therefore more wild animal suffering, but at the same time, wealthier people consume more environmental resources, and I’d expect the increased-wealth factor to matter more than the reduced-population factor (wealthy countries consume way more environmental resources than poor countries).

Finally, and perhaps most importantly, promoting global poverty alleviation attracts more interest in effective altruism and possibly has spillover effects into other cause areas. Many people get into effective altruism by learning about how much good they can do by donating to global poverty efforts, and later move on to other cause areas.

Long-term effects

Improving people’s standard of living has lots of potential long-term effects. Of the plausible effects I’ve thought of, these look the most important3:

  1. It could increase international stability by reducing scarcity/competition.
  2. It could decrease international stability by adding more major actors, making coordination more difficult.
  3. It probably increases research on dangerous technology4.
  4. It probably increases research on beneficial technology.
  5. It becomes harder to make differential intellectual progress.
  6. It causes humans to consume more environmental resources.
  7. It could decrease international stability by making arms races more likely.

Increasing international stability could be beneficial or harmful. Greater international stability reduces the chance of war, which reduces existential risk from nuclear or biological weapons (or other existentially threatening weapons of war that have yet to be developed). In a more stable world, nations probably have an easier time creating cooperative agreements to avoid arms races. Additionally, increasing stability probably increases technological research, which could be beneficial or harmful.

Technological research both increases our ability to avert catastrophes and increases the potential danger arising from powerful technologies. I suspect that improved technology generally increases the probability of an existential catastrophe. Right now our greatest existential threats arise from technologies developed by humans, and our technological development has done comparatively little to alleviate natural existential risks. I would expect this trend to continue.

If we increase global technological development, that probably makes it harder to make differential intellectual progress, meaning we have a more difficult time accelerating existential risk research to the point where it moves faster than research that increases existential risks. Helping poor people in countries like Malawi and Kenya probably doesn’t have much effect on technological development, but we should expect it to have some small effect in the long run, at least in expectation.

Consuming more environmental resources could be either good or bad. As discussed before, it reduces habitats, which reduces wild animal suffering; this is a good thing if wild animals’ lives are net negative. (Although climate change could increase wild animal populations.) It also probably increases existential risk by increasing the chance of runaway global warming, which is probably but not definitely bad.

Confidently claiming that global poverty has either good or bad long-term effects requires making conjunctive claims about uncertain effects: for global poverty reduction to be beneficial, it must increase international stability and this must reduce arms races and the chance of a major nuclear war and preventing human extinction must be good; or it must reduce international stability and this must slow technological development and it must be beneficial to slow technological development (and, again, preventing human extinction must be good); or it must increase climate change and this reduces wild animal populations without substantially increasing existential risk and wild animal lives are net negative; etc. Claiming that global poverty does long-term harm requires making similarly complex conjunctive arguments.

Although I cannot make any confident claims here, I lean toward expecting these effects, in order of confidence:

  1. Global poverty alleviation accelerates dangerous technology more rapidly than beneficial technology, which is bad.
  2. Global poverty alleviation increases international cooperation, which is good.
  3. Global poverty alleviation increases climate change, which is bad.

I tend to think that this first effect outweighs the second and I am more confident that it occurs, so I am weakly confident that making people wealthier has net negative effects in the long run.


Alleviating global poverty has second- and third-order effects, some good and some harmful; these effects are significant and hard to quantify. Without serious engagement with the long-term effects of global wealth, we cannot confidently claim that global poverty interventions do net good or harm.

The evidence tends to point weakly toward the conclusion that making people wealthier does net harm. This makes me wary of supporting efforts to alleviate global poverty. On the other hand, the evidence is nowhere near strong enough to justify trying to prevent people from helping the global poor. Any arguments in this essay about the long-term effects of global poverty should be treated as highly speculative. Nonetheless, this suggests that we cannot confidently claim that global poverty prevention helps the world in the long run.


  1. When I talk about donating to global poverty, I’m mostly talking about GiveWell top charities, because that’s what readers primarily donate to.

  2. For GiveWell’s other top recommended charity, the Against Malaria Foundation (AMF), people primarily tout its life-saving benefits, but I believe these are severely overstated. Still, AMF does prevent people from getting non-fatal cases of malaria which improves their quality of life, although I don’t know how valuable this is compared to cash transfers or deworming.

  3. I’m sure I’m missing lots of important effects. I don’t believe people address these sorts of issues enough, so I expect there exist lots of significant long-term effects that haven’t been explored.

  4. Developing countries currently have meager research output compared to wealthy nations, but this changes as countries become richer.

