Should Patient Philanthropists Invest Differently?

TLDR: No.

Confidence: Somewhat likely.

Summary

Some philanthropists discount the future much less than normal people. For philanthropists with low discount rates, does this change how they invest their money? Can they do anything to take advantage of other investors’ high time discounting?

We can answer this question in two different ways.

Should low-discount philanthropists invest differently in theory? No. [More]

Should low-discount philanthropists invest differently in practice? The real world differs from the standard theoretical approach in a few ways. These differences suggest that low-discount philanthropists should favor risky and illiquid investments slightly more than high-discount investors do. But the difference is too small to matter in practice. [More]


Two Types of Scientific Misconceptions You Can Easily Disprove

There are two opposite types of scientific claims that are easy to prove wrong: claims that never could have been proven in the first place, and claims that directly contradict your perception.

Type 1: The unknowable misconception

A heuristic for identifying scientific misconceptions: If this were true, how would we know?

Example: “People swallow 8 spiders per year during sleep.” If you were a scientist and you wanted to know how often people swallow spiders, how would you figure it out? You’d have to run a sleep study covering thousands of person-nights, filming people with a night-vision camera high-resolution enough to pick up something as small as a spider (which, as far as I know, doesn’t exist). Then you’d have to pore over tens of thousands of hours of footage by hand looking for spiders (because this factoid originated in a time when computer software wasn’t sophisticated enough to do it for you), track the locations of the spiders, and count how often they crawl into people’s mouths without coming back out. This is all theoretically possible, but it would be insanely expensive, and who would be crazy enough to do it?

Example: “The average man thinks about sex once every 7 seconds.” People can’t even continuously introspect on their own thoughts, so how would scientists track them? This one seems simply impossible to prove, no matter how big your budget is or how crazy you are.

Type 2: The perception-contradicting misconception

You can disprove some common misconceptions using only your direct perception.

Example: “Certain parts of the tongue can only detect certain tastes.” You can easily disprove this by placing food on different spots on your tongue.

Example: “You need two eyes to have depth perception.” Close one eye. Notice how you still have depth perception. Or look at a photograph, which was taken from a fixed perspective. Notice how you can still detect depth in the photograph.

You need two points of reference to get “true” depth perception, but the human visual cortex can almost always infer depth based on how a scene looks. It’s possible to trick your depth perception with carefully constructed optical illusions.

But in almost all real-world situations, you can correctly perceive depth with only one eye.

Example: “You (only) have five senses.” You can prove that you have at least two more senses: balance (equilibrioception) and the position of your limbs relative to each other (proprioception).

You can prove you have a sense of balance by closing your eyes and walking without falling over. You’re not using any of the standard five senses, but you can still stay upright.

You can prove you have proprioception by closing your eyes, flailing your arms around until they’re in random positions, and then bringing your hands together until your fingertips touch. You couldn’t do this if you didn’t have a proprioceptive sense.

(Some scientists say we have more than seven senses, but the other ones are harder to prove.)

So far I’ve given examples of factoids you can trivially disprove in five seconds. There are more misconceptions you can disprove if you’re willing to do a tiny bit of work. Example: “Women have one more rib than men.” Find a friend of the opposite gender and count your ribs!


Most Theories Can't Explain Why Game of Thrones Went Downhill

I’ve heard people repeat a few theories for why Game of Thrones started so well and ended so badly. Most of these theories don’t make sense.

Theory 1: David & Dan are good at adapting books, but they didn’t know what to do when they ran out of book.

Seasons 1 through 4, which adapted the first three books of A Song of Ice and Fire, are up there with the greatest television shows ever made. Season 5, which adapted the fourth book and part of the fifth book, was mediocre. If this theory were true, season 5 should have been on par with the earlier seasons, but it wasn’t.

Furthermore, season 6 was better than season 5, even though season 5 was still based on the books, and season 6 wasn’t.

What’s more, David & Dan wrote some excellent original content in the earlier seasons, such as the extended arc with Arya and the Hound in season 4, parts of which weren’t in the books.

Some people say, well, they know how to write short scenes, but they don’t know how to write story arcs. Then how do you explain the famously terrible dialogue in the later seasons?

Theory 2: David & Dan were always bad showrunners.

I hear this one a lot. There’s some evidence for this theory—prior to Game of Thrones, David was best known for writing X-Men Origins: Wolverine (a famously bad movie), and Dan had no prior writing credits. But if they’re bad showrunners, why were the first four seasons so good? I can buy that bad showrunners might accidentally create a pretty good show, but I don’t see how they could accidentally create one of the best shows of all time.

Theory 3: David & Dan lost interest and started phoning it in.

This explanation makes more sense because it can explain the nearly-monotonic decline in quality. But it still can’t explain why season 6 was better than season 5. And the timing doesn’t entirely work out—people usually say this about seasons 7 and 8, but season 5 was clearly worse than the previous four seasons, and it seems less plausible that they’d lose interest that early on.

Theory 4: Good writing emerges through a mysterious process that no one really understands.

This is my favorite theory. Many occasionally-great writers can’t consistently replicate their success, writers can’t tell which of their works will become popular, and nobody fully understands what makes great writing. That’s why, for example, Jane Austen thought Pride and Prejudice was her worst book, even though it’s what she’s most remembered for. Or why The Matrix is my favorite movie of all time, even though I like zero (0) other Wachowski movies. (The Wachowskis are another example of artists who occasionally produce brilliant works and most of the time don’t, and it’s not clear why.)

Or why people used to talk about good art coming from a muse—you didn’t write that brilliant story, you just wrote down the words that your muse gave you, which is just a poetic way of saying you have no idea how you came up with it.

This is kind of a non-explanation: “the reason Game of Thrones was inconsistently good is because lots of things are inconsistently good and we don’t know why.” But at least it turns a localized mystery into a much bigger mystery about the general nature of creativity.


Philanthropists Probably Shouldn't Mission-Hedge AI Progress

Summary

Confidence: Likely. [More]

Some people have asked, “should we invest in companies that are likely to do particularly well if transformative AI is developed sooner than expected?”1

In a previous essay, I developed a framework to evaluate mission-correlated investing. Today, I’m going to apply that framework to the cause of AI alignment.

(I’m specifically looking at whether to mission hedge AI, not whether to invest in AI in general. [More])

Whether to mission hedge crucially depends on three questions:

  1. What is the shape of the utility function with respect to AI progress?
  2. How volatile is AI progress?
  3. What investment has the strongest correlation to AI progress, and how strong is that correlation?

I came up with these answers:

  1. Utility function: No clear answer, but I primarily used a utility function with a linear relationship between AI progress and the marginal utility of money. I also looked at a different function where AI timelines determine how long our wealth gets to compound. [More]
  2. Volatility: I looked at three proxies for AI progress—industry revenue, ML benchmark performance, and AI timeline forecasts. These proxies suggest that the standard deviation of AI progress falls somewhere between 4% and 20%. [More]
  3. Correlation: A naively constructed hedge portfolio would have a correlation of 0.3 at best. A bespoke hedge (such as an “AI progress swap”) would probably be too expensive. An intelligently constructed portfolio might work better, but I don’t know how much better. [More]

Across the range of assumptions I tested, mission hedging usually—but not always—looked worse on the margin2 than investing in the mean-variance optimal portfolio with leverage. Mission hedging looks better if the hedge asset is particularly volatile and has a particularly strong correlation to AI progress, and if we make conservative assumptions for the performance of the mean-variance optimal portfolio. [More]

The most obvious changes to my model argue against mission hedging. [More] But there’s room to argue in favor. [More]


The Review Service Vortex of Death

Consumers want to know which products are good without having to buy them first. One way to do that is by reading reviews.

There are third-party services that provide product reviews. Unfortunately, almost all of them are useless because they inevitably fall into the Review Service Vortex of Death.

Review services have a fundamental incentives problem stemming from two facts:

  1. Businesses don’t want to get bad reviews and they will pay a lot of money to have fake good reviews written, or to have bad reviews removed.
  2. Consumers can’t tell when a review service is removing bad reviews.

Therefore, any review service, even if consumers pay to use it, is incentivized to accept businesses’ money to remove bad reviews, thus making the service useless for consumers. And it can get away with this behavior for a long time.

The biggest reviewers—including Better Business Bureau, Yelp, and Trustpilot—have all fallen into the Review Service Vortex of Death, and should not be trusted by consumers, but they continue to be used because it’s not common knowledge that they delete bad reviews in exchange for money. (And indeed, it’s hard to even prove that they do.)

What can consumers do about this? I don’t know. Businesses like Amazon, which make their money from retail sales, are less likely to fall into the Vortex, but they’re still vulnerable to businesses giving themselves fake reviews.

(I did write a review article one time—a review of donor-advised fund providers—and a couple of providers have subsequently emailed me to ask me to include them in my article. But sadly1, they didn’t offer me any bribes.)

Notes

  1. This is a joke. If a company offered me a bribe to include them, then I’d have to exclude them as a matter of principle. 


Index Funds That Vote Are Active, Not Passive

As an investor, you might invest in an index fund because you want to get unbiased, diversified exposure to the market. You don’t have to figure out which stocks will beat the market if you simply buy every stock.

But when you invest in an index fund, usually that means the fund can now vote on your behalf. The more stock an index fund owns, the more voting power it has. Generally speaking, the big index fund providers (including Vanguard and BlackRock) will vote in ways that align with their own corporate values—their top (stated) priorities are to increase climate change mitigation and workforce gender/racial diversity.

Regardless of whether you want this voting behavior, it means these index funds are not passive. By putting your money in an index fund that votes, you are implicitly claiming that it will make better voting decisions than the market.

(For that matter, any time you hold something other than the global market portfolio, you’re making an active bet. Sadly (and surprisingly), there aren’t any single index funds that offer the global market portfolio. But I digress.)


A Preliminary Model of Mission-Correlated Investing

Summary

TLDR: According to my preliminary model, the altruistic investing portfolio should ultimately allocate 5–20% on a risk-adjusted basis to mission-correlated investing. But for the current EA portfolio, it’s better on the margin to increase its risk-adjusted return than to introduce mission-correlated investments.

Last updated 2022-04-06.

The purpose of mission-correlated investing is to earn more money in worlds where your money matters more. For instance, if you’re working to prevent climate change, you could buy stock in oil companies. In worlds where oil companies are more successful and climate change gets worse, you make more money.

Previous work by Roth Tran (2019)1 proved that, under certain weak assumptions, philanthropists should invest more in so-called “evil” companies than they would from a pure profit-making standpoint. This result follows from the assumption that a philanthropist’s actions become more cost-effective when the world gets worse along some dimension.

That’s an interesting result. But all it says is that altruists should invest more than zero in mission hedging. How much more? Am I supposed to allocate 1% of my wealth to mission-correlated assets? 5%? 100%?

To answer this question, I extended the standard portfolio choice problem to allow for mission-correlated investing. This model makes the same assumptions as the standard problem—asset prices follow lognormal distributions, people experience constant relative risk aversion, etc.—plus the assumption that utility of money increases linearly with the quantity of the mission target, e.g., because the more CO2 there is in the atmosphere, the cheaper it is to extract.
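Concretely, the linear-marginal-utility assumption might be formalized like this (a sketch in my own notation, not necessarily the model’s exact functional form):

```latex
% Sketch only: my notation, not necessarily the model's exact form.
% w = wealth, m = quantity of the mission target (e.g., atmospheric CO2),
% \gamma = coefficient of relative risk aversion (CRRA).
U(w, m) = m \cdot \frac{w^{1-\gamma}}{1-\gamma}
% Marginal utility of money then scales linearly with m:
\frac{\partial U}{\partial w} = m \, w^{-\gamma}
```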

I used this model to find some preliminary results. Future work should explore the model setup and the relevant empirical questions, which I discuss in the future work section.

Here are the answers the model gives, with my all-things-considered confidence in each:

  • Given no constraints, philanthropists should allocate somewhere between 2% and 40% to mission hedging on a risk-adjusted basis,2 depending on what assumptions we make. Confidence: Somewhat likely. [More]
  • Given no constraints, and using my best-guess input parameters:
    • Under this model, a philanthropist who wants to hedge a predictable outcome, such as CO2 emissions, should allocate ~5% (risk-adjusted) to mission hedging.
    • Under this model, a philanthropist who wants to hedge a more volatile outcome, for example AI progress, should allocate ~20% to mission hedging on a risk-adjusted basis.
  • If you can’t use leverage, then you shouldn’t mission hedge unless mission hedging looks especially compelling. Confidence: Likely. [More]
  • If you currently invest most of your money in a legacy investment that you’d like to reduce your exposure to, then it’s more important on the margin to seek high expected return than to mission hedge. Confidence: Likely. [More]
  • The optimal allocation to mission hedging is proportional to: (Confidence: Likely)
    1. the correlation between the hedge and the mission target being hedged;
    2. the standard deviation of the mission target;
    3. your degree of risk tolerance;
    4. the inverse of the standard deviation of the hedge.
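The proportionality above can be sketched as a one-liner. (Illustrative only: the function name, the constant `k`, and the example numbers are mine, not the model’s closed-form solution.)

```python
def mission_hedge_allocation(correlation, sigma_target, sigma_hedge,
                             risk_tolerance, k=1.0):
    """Risk-adjusted allocation to the mission hedge.

    Proportional to correlation * sigma_target * risk_tolerance,
    and inversely proportional to sigma_hedge. k is an unknown
    constant of proportionality (the post states only the
    proportional relationship, so k=1 is a placeholder).
    """
    return k * correlation * sigma_target * risk_tolerance / sigma_hedge

# Example: a hedge with correlation 0.3 to a mission target with 20%
# standard deviation (the volatile AI-progress case), hedge volatility
# 40%, and risk tolerance 1:
allocation = mission_hedge_allocation(0.3, 0.20, 0.40, 1.0)  # 0.15
```

Note how each input moves the answer: doubling the correlation or the mission target’s volatility doubles the allocation, while doubling the hedge’s own volatility halves it.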

Cross-posted to the Effective Altruism Forum.


How I Estimate Future Investment Returns

To make informed investing decisions, I want to estimate the future expected return of my portfolio. Markets are unpredictable, and future returns will likely deviate significantly from estimates—AQR believes there’s a 50% chance that 10-year realized equity returns will differ from their predictions by more than 3% per year. Still, it’s helpful to come up with a median guess.

In this post, I explain the projections that I use for my own financial planning.

Last updated 2022-06-16.


Can Good Writing Be Taught?

Epistemic status: Highly speculative; unburdened by any meaningful supporting evidence.

I’ve written something like 200 essays for school. Writing those essays did not teach me how to write. Writing for fun taught me how to write.

When I was in high school, I used to complain that the essays I was required to write were both boring and unhelpful, and I’d learn more by writing essays about whatever I wanted. But if my teachers had let students write whatever they wanted, I don’t think most of them would have gotten very far. I don’t think I would have gotten very far, either. There’s a big difference between

me: I have an idea! I will write about it!

versus

teacher: Please have an idea and write about it.

me: What should I write about? I dunno, I guess I could write about X, I can probably force myself to come up with something to say about it.

Instead of writing something detached from ordinary life, like literary analysis, should high schoolers be taught to write something relevant, like emails?

In fact, I was taught how to write emails in high school (although that was only a small percentage of what we did), and the teaching was counterproductive. The way my teachers taught me to write emails was significantly wrong, and probably would have hindered my career if I had listened. (As a basic example, they said to always start an email with “Dear [name]”. Nobody starts emails that way in real life.) All the people with jobs who write emails somehow managed to un-learn the anti-lessons that they were taught.

But even if my teachers had taught me how to write emails correctly, it wouldn’t have mattered. If I have to slog through a purposeless assignment that I don’t care about, anything I learn from it doesn’t stick. I only learn from doing things if I’m doing them for a reason.

In conclusion, it’s impossible to force someone to learn good writing. They have to want to write.


Existential Risk Reduction Is Naive (And That's a Good Thing)

I see many people criticize existential risk reduction as naive or arrogant. “What, you think you can save the world?”

I’m not going to dispute this. Yes, it’s naive and arrogant, and that’s a good thing.

There are countless movies about saving the world. Lots of people fantasize about saving the world (or, at least, my friends and I did when we were kids, and I still do). Ask any five-year-old child, and they can tell you that saving the world is awesome. But it takes a particularly subtle and clever mind to understand that actually, trying to save the world is a silly waste of time.

But actually, the five-year-old was correct all along. Saving the world is, in fact, awesome! We should do it!

The mature, adult response is that you can’t save the world, and you should be content with contributing to society in your own small way. I could make some clever argument about scope sensitivity or universalist morality or something, but I don’t need to. You already know that saving the world is awesome. Everybody knows it, they’ve just forgotten.

Climate change is the only mainstream cause that at least has a plausible case for saving the world. And indeed some climate change activists think in those terms. Even though I believe it’s unlikely that mitigating climate change can save the world, it’s still admirable to try. I would like to see more people try. Ask yourself: What could destroy the world, and how do we stop that from happening?

