Philanthropists Probably Shouldn't Mission-Hedge AI Progress

Summary

Confidence: Likely. [More]

Some people have asked, “should we invest in companies that are likely to do particularly well if transformative AI is developed sooner than expected?”1

In a previous essay, I developed a framework to evaluate mission-correlated investing. Today, I’m going to apply that framework to the cause of AI alignment.

(I’m specifically looking at whether to mission hedge AI, not whether to invest in AI in general. [More])

Whether to mission hedge crucially depends on three questions:

  1. What is the shape of the utility function with respect to AI progress?
  2. How volatile is AI progress?
  3. What investment has the strongest correlation to AI progress, and how strong is that correlation?

I came up with these answers:

  1. Utility function: No clear answer, but I primarily used a utility function with a linear relationship between AI progress and the marginal utility of money. I also looked at a different function where AI timelines determine how long our wealth gets to compound. [More]
  2. Volatility: I looked at three proxies for AI progress—industry revenue, ML benchmark performance, and AI timeline forecasts. These proxies suggest that the standard deviation of AI progress falls somewhere between 4% and 20%. [More]
  3. Correlation: A naively-constructed hedge portfolio would have a correlation of 0.3 at best. A bespoke hedge (such as an “AI progress swap”) would probably be too expensive. An intelligently-constructed portfolio might work better, but I don’t know how much better. [More]
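One way to see why a 0.3 correlation is marginal for hedging purposes: the share of the mission target’s variance that a hedge portfolio actually tracks scales with the square of the correlation, so the benefit falls off quickly as correlation drops. (A minimal sketch; the 0.5 and 0.7 values below are hypothetical comparison points, not figures from my analysis.)

```python
# Variance tracked by a hedge grows with the square of its correlation
# to the hedged quantity, so weak correlations buy very little hedging.
for r in (0.3, 0.5, 0.7):
    print(f"r = {r}: hedge tracks {r**2:.0%} of the target's variance")
```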

Across the range of assumptions I tested, mission hedging usually—but not always—looked worse on the margin2 than investing in the mean-variance optimal portfolio with leverage. Mission hedging looks better if the hedge asset is particularly volatile and has a particularly strong correlation to AI progress, and if we make conservative assumptions for the performance of the mean-variance optimal portfolio. [More]

The most obvious changes to my model argue against mission hedging. [More] But there’s room to argue in favor. [More]


The Review Service Vortex of Death

Consumers want to know which products are good without having to buy them first. One way to do that is by reading reviews.

There are third-party services that provide product reviews. Unfortunately, almost all of them are useless because they inevitably fall into the Review Service Vortex of Death.

Review services have a fundamental incentives problem stemming from two facts:

  1. Businesses don’t want to get bad reviews and they will pay a lot of money to have fake good reviews written, or to have bad reviews removed.
  2. Consumers can’t tell when a review service is removing bad reviews.

Therefore, any review service, even if consumers pay to use it, is incentivized to accept businesses’ money to remove bad reviews, thus making the service useless for consumers. And it can get away with this behavior for a long time.

The biggest review services—including the Better Business Bureau, Yelp, and Trustpilot—have all fallen into the Review Service Vortex of Death and should not be trusted by consumers. But they continue to be used because it’s not common knowledge that they delete bad reviews in exchange for money. (And indeed, it’s hard to even prove that they do.)

What can consumers do about this? I don’t know. Businesses like Amazon, which make their money from retail sales, are less likely to fall into the Vortex, but they’re still vulnerable to sellers giving themselves fake reviews.

(I did write a review article one time—a review of donor-advised fund providers—and a couple of providers have subsequently emailed me to ask me to include them in my article. But sadly1, they didn’t offer me any bribes.)

Notes

  1. This is a joke. If a company offered me a bribe to include them, then I’d have to exclude them as a matter of principle. 


Index Funds That Vote Are Active, Not Passive

As an investor, you might invest in an index fund because you want to get unbiased, diversified exposure to the market. You don’t have to figure out which stocks will beat the market if you simply buy every stock.

But when you invest in an index fund, usually that means the fund can now vote on your behalf. The more stock an index fund owns, the more voting power it has. Generally speaking, the big index fund providers (including Vanguard and BlackRock) will vote in ways that align with their own corporate values—their top (stated) priorities are to increase climate change mitigation and workforce gender/racial diversity.

Regardless of whether you want this voting behavior, it means these index funds are not passive. By putting your money in an index fund that votes, you are implicitly claiming that it will make better voting decisions than the market.

(For that matter, any time you hold something other than the global market portfolio, you’re making an active bet. Sadly (and surprisingly), there isn’t any single index fund that offers the global market portfolio. But I digress.)


A Preliminary Model of Mission-Correlated Investing

Summary

TLDR: According to my preliminary model, the altruistic investing portfolio should ultimately allocate 5–20% on a risk-adjusted basis to mission-correlated investing. But for the current EA portfolio, it’s better on the margin to increase its risk-adjusted return than to introduce mission-correlated investments.

Last updated 2022-04-06.

The purpose of mission-correlated investing is to earn more money in worlds where your money matters more. For instance, if you’re working to prevent climate change, you could buy stock in oil companies. In worlds where oil companies are more successful and climate change gets worse, you make more money.

Previous work by Roth Tran (2019)1 proved that, under certain weak assumptions, philanthropists should invest more in so-called “evil” companies than they would from a pure profit-making standpoint. This result follows from the assumption that a philanthropist’s actions become more cost-effective when the world gets worse along some dimension.

That’s an interesting result. But all it says is altruists should invest more than zero in mission hedging. How much more? Am I supposed to allocate 1% of my wealth to mission-correlated assets? 5%? 100%?

To answer this question, I extended the standard portfolio choice problem to allow for mission-correlated investing. This model makes the same assumptions as the standard problem—asset prices follow lognormal distributions, people experience constant relative risk aversion, etc.—plus the assumption that utility of money increases linearly with the quantity of the mission target, e.g., because the more CO2 there is in the atmosphere, the cheaper it is to extract.

I used this model to derive some preliminary results. The model setup and the relevant empirical questions deserve deeper exploration, which I discuss in the future work section.

Here are the answers the model gives, with my all-things-considered confidence in each:

  • Given no constraints, philanthropists should allocate somewhere between 2% and 40% to mission hedging on a risk-adjusted basis,2 depending on what assumptions we make. Confidence: Somewhat likely. [More]
  • Given no constraints, and using my best-guess input parameters:
    • Under this model, a philanthropist who wants to hedge a predictable outcome, such as CO2 emissions, should allocate ~5% (risk-adjusted) to mission hedging.
    • Under this model, a philanthropist who wants to hedge a more volatile outcome, for example AI progress, should allocate ~20% to mission hedging on a risk-adjusted basis.
  • If you can’t use leverage, then you shouldn’t mission hedge unless mission hedging looks especially compelling. Confidence: Likely. [More]
  • If you currently invest most of your money in a legacy investment that you’d like to reduce your exposure to, then it’s more important on the margin to seek high expected return than to mission hedge. Confidence: Likely. [More]
  • The optimal allocation to mission hedging is proportional to: (Confidence: Likely)
    1. the correlation between the hedge and the mission target being hedged;
    2. the standard deviation of the mission target;
    3. your degree of risk tolerance;
    4. the inverse of the standard deviation of the hedge.
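The proportionality relationships above can be written down as a toy formula. The proportionality constant k and every numeric input below are hypothetical; the ~5% and ~20% allocations quoted earlier come from the full model, not from this sketch.

```python
# Toy sketch: optimal mission-hedge allocation is proportional to the
# hedge-target correlation, the target's volatility, and risk
# tolerance, and inversely proportional to the hedge's volatility.
def mission_hedge_allocation(corr, sigma_target, risk_tolerance, sigma_hedge, k=1.0):
    """k is a hypothetical proportionality constant, not a model output."""
    return k * corr * sigma_target * risk_tolerance / sigma_hedge

# Doubling the hedge's volatility halves the allocation, all else equal:
a1 = mission_hedge_allocation(0.3, 0.10, 1.0, sigma_hedge=0.20)
a2 = mission_hedge_allocation(0.3, 0.10, 1.0, sigma_hedge=0.40)
assert abs(a1 - 2 * a2) < 1e-12
```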

Cross-posted to the Effective Altruism Forum.


How I Estimate Future Investment Returns

To make informed investing decisions, I want to estimate the future expected return of my portfolio. Markets are unpredictable, and future returns will likely deviate significantly from any estimate—AQR believes there’s a 50% chance that 10-year realized equity returns will differ from their predictions by more than 3% per year. Still, it’s helpful to come up with a median guess.
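As a back-of-the-envelope check on the AQR claim above: if forecast errors were normally distributed (my assumption, not AQR’s), then a 50% chance of missing by more than 3% per year pins down the implied standard deviation of the forecast errors.

```python
from statistics import NormalDist

# For a normal distribution, P(|error| < z * sd) = 0.5 when
# z = inv_cdf(0.75) ≈ 0.674, so a 3%/yr median absolute error
# implies an error standard deviation of roughly 4.4%/yr.
median_abs_error = 0.03
z = NormalDist().inv_cdf(0.75)
implied_sd = median_abs_error / z
print(f"implied forecast-error sd: {implied_sd:.1%} per year")
```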

In this post, I explain the projections that I use for my own financial planning.

Last updated 2022-06-16.


Can Good Writing Be Taught?

Epistemic status: Highly speculative; unburdened by any meaningful supporting evidence.

I’ve written something like 200 essays for school. Writing those essays did not teach me how to write. Writing for fun taught me how to write.

When I was in high school, I used to complain that the essays I was required to write were both boring and unhelpful, and I’d learn more by writing essays about whatever I wanted. But if my teachers had let students write whatever they wanted, I don’t think most of them would have gotten very far. I don’t think I would have gotten very far, either. There’s a big difference between

me: I have an idea! I will write about it!

versus

teacher: Please have an idea and write about it.

me: What should I write about? I dunno, I guess I could write about X, I can probably force myself to come up with something to say about it.

Instead of writing something detached from ordinary life, like literary analysis, should high schoolers be taught to write something relevant, like emails?

In fact, I was taught how to write emails in high school (although that was only a small fraction of what we did), and the teaching was counterproductive. The way my teachers taught me to write emails was significantly wrong and probably would have hindered my career if I had listened. (As a basic example, they said to always start an email with “Dear [name]”. Nobody starts emails that way in real life.) All the people with jobs who write emails somehow managed to un-learn the anti-lessons they were taught.

But even if my teachers had taught me how to write emails correctly, it wouldn’t have mattered. If I have to slog through a purposeless assignment that I don’t care about, anything I learn from it doesn’t stick. I only learn from doing things if I’m doing them for a reason.

In conclusion, it’s impossible to force someone to learn good writing. They have to want to write.


Existential Risk Reduction Is Naive (And That's a Good Thing)

I see many people criticize existential risk reduction as naive or arrogant. “What, you think you can save the world?”

I’m not going to dispute this. Yes, it’s naive and arrogant, and that’s a good thing.

There are countless movies about saving the world. Lots of people fantasize about saving the world (or, at least, my friends and I did when we were kids, and I still do). Ask any five-year-old child, and they can tell you that saving the world is awesome. But it takes a particularly subtle and clever mind to understand that actually, trying to save the world is a silly waste of time.

But actually, the five-year-old was correct all along. Saving the world is, in fact, awesome! We should do it!

The mature, adult response is that you can’t save the world, and you should be content with contributing to society in your own small way. I could make some clever argument about scope sensitivity or universalist morality or something, but I don’t need to. You already know that saving the world is awesome. Everybody knows it, they’ve just forgotten.

Climate change is the only mainstream cause that at least has a plausible case for saving the world. And indeed some climate change activists think in those terms. Even though I believe it’s unlikely that mitigating climate change can save the world, it’s still admirable to try. I would like to see more people try. Ask yourself: What could destroy the world, and how do we stop that from happening?


Altruistic Investors Care About Other Altruists' Portfolios

Confidence: Highly likely.

In some sense, altruists and traditional investors have the same investing goals—they want to own the portfolio with the best balance of return and risk. But self-interested people only care about their own portfolios. If you’re a philanthropist, you also care about other (value-aligned) philanthropists’ portfolios.

When the market goes up, you have more money, and you can donate more to charity. But other altruists also have more money, and they can donate more to charity, so your money isn’t as valuable. Conversely, when markets go down, you have less money to donate at the exact time when charities need funding the most.

That means you should not (necessarily) invest your money in the best overall portfolio. Instead, you should use your investments to move the pool of altruistic money in the direction of optimal.

An illustration:

Alice and Bob both donate to the Against Malaria Foundation (AMF). (For simplicity, let’s say they’re the only two donors.) AMF has diminishing marginal utility of money—once it distributes malaria nets in all the best places, the next round of nets won’t save quite as many lives. So Alice and Bob prefer to invest in a way that will earn good returns but without too much risk. Ideally, they’d both hold something like the total world stock market.

Bob lives in the United States, and he invests all his money in US stocks. Alice could simply buy the global stock portfolio, which is roughly 50% US stocks and 50% international stocks. But that would put their aggregate portfolio at 75% US stocks, 25% international stocks (assuming Alice and Bob have the same amount of money). So AMF is being funded by an investment portfolio that’s overweighted toward US stocks, which adds risk without any reward to compensate.

Alice can fix this by investing her entire portfolio in non-US stocks. Now the aggregate portfolio of AMF donors is 50% US stocks, 50% international stocks, just as it should be. It wouldn’t make sense for someone to hold 0% US stocks in their personal retirement portfolio, but this strategy works for Alice because she’s a philanthropist.
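The arithmetic in this illustration is simple enough to write down. This is a toy sketch that assumes, as the example does, that Alice and Bob hold equal amounts of money.

```python
def aggregate_us_weight(alice_us, bob_us):
    """US-stock weight of the pooled portfolio of two equally sized
    portfolios (equal sizing is the example's simplifying assumption)."""
    return (alice_us + bob_us) / 2

# Bob holds 100% US stocks. If Alice buys the global portfolio (50% US),
# the donor pool ends up overweight US:
assert aggregate_us_weight(0.50, 1.00) == 0.75

# If Alice instead holds 0% US, the pool lands on the 50/50 target:
assert aggregate_us_weight(0.00, 1.00) == 0.50
```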

(Of course, Alice could also talk to Bob and persuade him to diversify his investments, which might be an even better idea!)

Now, I’m not trying to say that the global stock market is the best investment, or that Alice did the exact right thing in this scenario. This is just an illustration of a broader point: for an individual altruistic donor, the best investment portfolio on the margin might not be the same thing as the best overall portfolio. And altruists should pick the portfolio that’s best on the margin.

(I wrote this post to provide an easy reference for this concept. The concept is not original to me—I originally heard it from Paul Christiano’s Risk aversion and investment (for altruists).)


Should Earners-to-Give Work at Startups Instead of Big Companies?

Summary

Confidence: Somewhat likely.

Cross-posted to the Effective Altruism Forum.

Effective altruist earners-to-give might be able to donate more money if, instead of working at big companies for high salaries, they work at startups and get paid in equity. Startups are riskier than big companies, but EAs care less about risk than most people.

Working at a startup is easier than starting one. It doesn’t pay as well, but based on my research, it looks like EA startup employees can earn more than big company employees in expectation.

Does the optimal EA investment portfolio include a significant allocation to startups? To answer that question, I estimated the expected return and risk of startups by adding up the following considerations:

  1. Find a baseline of startup performance by looking at historical data on VC firm returns.
  2. VC performance is somewhat persistent. EAs can beat the average by working at startups that the top VC firms invest in.
  3. Startup employees get worse equity terms than VCs, but they also don’t have to pay management fees, and they get meta-options. Overall, employees come out looking better than VCs.
  4. Current market conditions suggest that future performance will be worse than past performance.
  5. Startups are much riskier than publicly-traded stocks, and the startup market is moderately correlated with stocks (r=0.7).

All things considered, my best guess is that more earners-to-give should consider working at startups.


Obvious Investing Facts

Last updated 2022-11-18.

Many investors, even professionals, are ignorant about obvious facts that they really should know. When I say a fact is “obvious”, what I mean is that you can easily observe it by looking at widely-available data using simple statistical tools.

A list of obvious but underappreciated facts:

Fact 1. A single large-cap stock is about 2x as volatile as the total stock market. A small-cap stock is about 4x as volatile. It’s common knowledge that individual stocks are risky, but most people don’t know how to quantify the risk, and I believe they tend to underestimate it. I wrote a whole essay about this because I think it’s the most important underappreciated investing fact.
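To put rough numbers on those multipliers: the 16% annual standard deviation for the total market below is a common ballpark I’m assuming for illustration, not a figure from this post.

```python
# Toy illustration of the 2x / 4x volatility multipliers, assuming a
# ballpark 16% annual standard deviation for the total stock market.
market_sd = 0.16
large_cap_sd = 2 * market_sd  # ~32% per year
small_cap_sd = 4 * market_sd  # ~64% per year

for name, sd in [("total market", market_sd),
                 ("single large-cap stock", large_cap_sd),
                 ("single small-cap stock", small_cap_sd)]:
    print(f"{name}: swings of roughly ±{sd:.0%} in a typical year")
```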

