Utilitarianism Isn't About Doing Bad Things for the Greater Good. It's About Doing the Most Good

In the eyes of popular culture (and in the eyes of many philosophy professors), the essence of utilitarianism is “it’s okay to do bad things for the greater good.” In my mind, that’s not the essence of utilitarianism. The essence is, “doing more good is better than doing less good.”

Utilitarianism is about doing the most good. You don’t do the most good by fretting over weird edge cases where you can harm someone to help other people. You do the most good by picking up massive free wins like donating to effective charities where money does 100x more good than it would if you spent it on yourself.

(Richard Y. Chappell might call this beneficentrism: “the view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.” You can be a beneficentrist without being a utilitarian, but if you’re a utilitarian, you have to be a beneficentrist, and as a utilitarian, being a beneficentrist is much more important than being a “do bad things for the greater good”-ist.)


The United States Is Weird

The United States is exceptional along many dimensions. Sometimes, people pick two of these dimensions and try to argue that one causes the other. And that’s probably true sometimes. But “the USA is #1 in the world on dimension X, and #1 on dimension Y” isn’t much evidence that X causes Y.

The United States has:

  1. the 3rd largest population, and the largest population of any first-world country (by far)
  2. the 3rd or 4th largest land area (depending on how you measure USA’s and China’s land)
  3. the highest GDP of any country
  4. the highest median income, the 8th highest GDP per capita (PPP), and the highest GDP per capita of any large country (the top 7 countries combined have a lower population than California)
  5. unusually high income inequality for a developed country
  6. the highest healthcare expenditure per capita
  7. the highest gun ownership per capita (with double the gun ownership of the #2 country)
  8. an unusually high homicide rate for a developed country
  9. the most Nobel Prize winners (by a huge margin) and the most Fields Medalists (narrowly beating France)
  10. the highest obesity rate of any large country, and the 11th highest overall
  11. the most top universities (whatever that means) by a wide margin
  12. unusually low life expectancy for a developed country
  13. an unusually high fertility rate for a developed country
  14. the 2nd most exports (after China) and the most imports (China is #2)
  15. the most military expenditures (by a factor of 3) and the 2nd most nuclear weapons

(Those were just the examples I could come up with in an hour of research.)

(I also looked at a few stats where I thought the USA might be exceptional, but it turned out not to be: IQ, educational attainment, infant mortality1, and net immigration.)

A lot of these facts are clearly intertwined—the fact that the US has the highest GDP is related to the facts that it has the highest military expenditures, the highest healthcare expenditures, and the 2nd highest exports. (But they’re not fully intertwined, because the US still has high military and healthcare expenditures relative to GDP.)

For other facts, you can come up with narratives as to why they’re related—maybe the high obesity rate causes the low life expectancy, maybe high gun ownership causes the (relatively) high homicide rate. But maybe not. The United States is weird and I don’t have a great handle on why it’s weird (and, as far as I know, nobody else does either). Until someone comes up with a Grand Theory of National Weirdness, I’m reluctant to pick two ways in which the USA is weird and claim one causes the other.

Notes

  1. Some sources say the USA has high infant mortality. I didn’t look into this much, but the CIA World Factbook claims that the USA defines infant mortality more broadly than most countries, and if you adjust for this, infant mortality looks similar to most developed countries. 


Should Patient Philanthropists Invest Differently?

TLDR: No.

Confidence: Somewhat likely.

Summary

Some philanthropists discount the future much less than normal people. For philanthropists with low discount rates, does this change how they invest their money? Can they do anything to take advantage of other investors’ high time discounting?

We can answer this question in two different ways.

Should low-discount philanthropists invest differently in theory? No. [More]

Should low-discount philanthropists invest differently in practice? The real world differs from the standard theoretical approach in a few ways. These differences suggest that low-discount philanthropists should favor risky and illiquid investments slightly more than high-discount investors do. But the difference is too small to matter in practice. [More]
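To see why the discount rate matters so much here, here’s a toy illustration (the dollar amount, horizon, and rates below are made-up numbers, not figures from the post):

```python
# Illustrative only: how much the discount rate matters over long horizons.
# The benefit size, horizon, and rates are arbitrary placeholders.

def present_value(benefit, annual_discount_rate, years):
    """Present value today of a fixed benefit received `years` from now."""
    return benefit / (1 + annual_discount_rate) ** years

benefit = 1_000_000  # value of some future good, in today's dollars
years = 100

for rate in [0.05, 0.01, 0.001]:
    pv = present_value(benefit, rate, years)
    print(f"discount rate {rate:.1%}: present value ≈ ${pv:,.0f}")

# At a 5%/year discount rate the benefit is worth ~$7,600 today;
# at 0.1%/year it's worth ~$905,000. Low-discount philanthropists
# care enormously more about far-future outcomes.
```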


Two Types of Scientific Misconceptions You Can Easily Disprove

Last updated 2023-03-22 to add another example.

There are two opposite types of scientific claims that are easy to prove wrong: claims that never could have been proven in the first place, and claims that directly contradict your perception.

Type 1: The unknowable misconception

A heuristic for identifying scientific misconceptions: If this were true, how would we know?

Example: “People swallow 8 spiders per year during sleep.” If you were a scientist and you wanted to know how often people swallow spiders, how would you figure it out? You’d have to do a sleep study for thousands of person-nights where you film people sleeping using a night vision camera that’s so high-quality that it can pick up something as small as a spider (which, as far as I know, doesn’t exist) and then pore over the tens of thousands of hours of footage by hand to look for spiders (because this factoid originated in a time when computer software wasn’t sophisticated enough to do it for you) and track the locations of the spiders and count how often they crawl into people’s mouths without coming back out. This is all theoretically possible but it would be insanely expensive and who would be crazy enough to do it?

Example: “The average man thinks about sex once every 7 seconds.” People can’t even introspect on their own thoughts on a continuous basis, so how would scientists do it? This one seems simply impossible to prove, regardless of how big your budget is or how crazy you are.

Example: “Only 7% of communication is verbal, and 93% is nonverbal.” What does that even mean? How would a scientific study quantify all the information that two people transmit during a conversation and measure its informational complexity, and then conclude that 93% is nonverbal? You can kind of measure the information in words by compressing the text, but there’s no known way to accurately measure the information in nonverbal communication.

(This factoid does come from an actual study, but what the study actually showed1 was that, among a sample of 37 university psychology students doing two different simple communication tasks, when verbal and nonverbal cues conflicted, they preferred the nonverbal cue 93% of the time.)
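As an aside on the “compress the text” point above: here’s a minimal sketch using Python’s standard zlib module. Compressed size is only a crude upper bound on the information in the words, and there’s no comparable trick for the nonverbal side.

```python
# Rough sketch: bound the information content of the verbal channel by compressing it.
# The transcript below is a placeholder, not data from any study.
import zlib

transcript = "Fine. I'm fine. Everything is fine. " * 50

raw = transcript.encode("utf-8")
compressed = zlib.compress(raw, 9)

print(f"raw bytes:        {len(raw)}")         # 1800
print(f"compressed bytes: {len(compressed)}")  # far fewer, because the text is repetitive
```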

Type 2: The perception-contradicting misconception

You can disprove some common misconceptions using only your direct perception.

Example: “Certain parts of the tongue can only detect certain tastes.” You can easily disprove this by placing food on different spots on your tongue.

Example: “You need two eyes to have depth perception.” Close one eye. Notice how you still have depth perception. Or look at a photograph, which was taken from a fixed perspective. Notice how you can still detect depth in the photograph.

You need two points of reference to get “true” depth perception, but the human visual cortex can almost always infer depth based on how a scene looks. It’s possible to trick your depth perception with optical illusions that exploit these inferences.

But in almost all real-world situations, you can correctly perceive depth with only one eye.

Example: “You (only) have five senses.” You can prove that you have at least two more senses: balance (equilibrioception) and the position of your limbs relative to each other (proprioception).

You can prove you have a sense of balance by closing your eyes and walking without falling over. You’re not using any of the standard five senses, but you can still stay upright.

You can prove you have proprioception by closing your eyes, flailing your arms around until they’re in random positions, and then bringing your hands together until your fingertips touch. You couldn’t do this if you didn’t have a proprioceptive sense.

(Some scientists say we have more than seven senses, but the other ones are harder to prove.)

So far I’ve given examples of factoids you can trivially disprove in five seconds. There are more misconceptions you can disprove if you’re willing to do a tiny bit of work. Example: “Women have one more rib than men.” Find a friend of the opposite gender and count your ribs!


Most Theories Can't Explain Why Game of Thrones Went Downhill

I’ve heard people repeat a few theories for why Game of Thrones started so well and ended so badly. Most of these theories don’t make sense.

Theory 1: David & Dan are good at adapting books, but they didn’t know what to do when they ran out of book.

Seasons 1 through 4, which adapted the first three books of A Song of Ice and Fire, are up there with the greatest television shows ever made. Season 5, which adapted the fourth book and part of the fifth book, was mediocre. If this theory were true, season 5 should have been on par with the earlier seasons, but it wasn’t.

Furthermore, season 6 was better than season 5, even though season 5 was still based on the books, and season 6 wasn’t.

Even more, David & Dan wrote some excellent original content in the earlier seasons, such as the extended arc with Arya and the Hound in season 4 (see this scene, which wasn’t in the books).

Some people say, well, they know how to write short scenes, but they don’t know how to write story arcs. Then how do you explain the famously terrible dialogue in the later seasons?

Theory 2: David & Dan were always bad showrunners.

I hear this one a lot. There’s some evidence for this theory—prior to Game of Thrones, David was best known for writing X-Men Origins: Wolverine (a famously bad movie), and Dan had no prior writing credits. But if they’re bad showrunners, why were the first four seasons so good? I can buy that bad showrunners might accidentally create a pretty good show, but I don’t see how they could accidentally create one of the best shows of all time.

Theory 3: David & Dan lost interest and started phoning it in.

This explanation makes more sense because it can explain the nearly-monotonic decline in quality. But it still can’t explain why season 6 was better than season 5. And the timing doesn’t entirely work out—people usually say this about seasons 7 and 8, but season 5 was clearly worse than the previous four seasons, and it seems less plausible that they’d lose interest that early on.

Theory 4: Good writing emerges through a mysterious process that no one really understands.

This is my favorite theory. Many occasionally great writers can’t consistently replicate their success, writers can’t tell which of their works will become popular, and nobody fully understands what makes great writing. That’s why, for example, Jane Austen thought Pride and Prejudice was her worst book, even though it’s what she’s most remembered for. Or why The Matrix is my favorite movie of all time, even though I like ~~zero (0)~~ one (1)1 other Wachowski movie. (The Wachowskis are another example of artists who occasionally produce brilliant works and most of the time don’t, and it’s not clear why.)

Or why people used to talk about good art coming from a muse—you didn’t write that brilliant story, you just wrote down the words that your muse gave you, which is just a poetic way of saying you have no idea how you came up with it.

This is kind of a non-explanation: “the reason Game of Thrones was inconsistently good is because lots of things are inconsistently good and we don’t know why.” But at least it turns a localized mystery into a much bigger mystery about the general nature of creativity.

  1. Edit 2023-09-04: Originally I wrote zero, but I just remembered that the Wachowskis co-wrote V for Vendetta, which I enjoyed. This is an irrelevant minor detail but I am committed to factual accuracy even when it doesn’t matter. 


Philanthropists Probably Shouldn't Mission-Hedge AI Progress

Summary

Confidence: Likely. [More]

Some people have asked, “should we invest in companies that are likely to do particularly well if transformative AI is developed sooner than expected?”1

In a previous essay, I developed a framework to evaluate mission-correlated investing. Today, I’m going to apply that framework to the cause of AI alignment.

(I’m specifically looking at whether to mission hedge AI, not whether to invest in AI in general. [More])

Whether to mission hedge crucially depends on three questions:

  1. What is the shape of the utility function with respect to AI progress?
  2. How volatile is AI progress?
  3. What investment has the strongest correlation to AI progress, and how strong is that correlation?

I came up with these answers:

  1. Utility function: No clear answer, but I primarily used a utility function with a linear relationship between AI progress and the marginal utility of money. I also looked at a different function where AI timelines determine how long our wealth gets to compound. [More]
  2. Volatility: I looked at three proxies for AI progress—industry revenue, ML benchmark performance, and AI timeline forecasts. These proxies suggest that the standard deviation of AI progress falls somewhere between 4% and 20% (see the sketch just after this list). [More]
  3. Correlation: A naively constructed hedge portfolio would have a correlation of 0.3 at best. A bespoke hedge (such as an “AI progress swap”) would probably be too expensive. An intelligently constructed portfolio might work better, but I don’t know how much better. [More]
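Here’s a minimal sketch of the volatility calculation behind point 2, using made-up proxy values rather than the actual data:

```python
import numpy as np

# Hypothetical annual values of an AI-progress proxy (e.g., industry revenue).
# These numbers are illustrative only.
proxy = np.array([10.0, 11.2, 12.1, 14.0, 15.1, 17.5, 18.9, 22.0])

log_changes = np.diff(np.log(proxy))   # annual log growth rates
annual_sd = log_changes.std(ddof=1)    # sample standard deviation of growth

print(f"annualized standard deviation of proxy growth: {annual_sd:.1%}")
```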

Across the range of assumptions I tested, mission hedging usually—but not always—looked worse on the margin2 than investing in the mean-variance optimal portfolio with leverage. Mission hedging looks better if the hedge asset is particularly volatile and has a particularly strong correlation to AI progress, and if we make conservative assumptions for the performance of the mean-variance optimal portfolio. [More]

The most obvious changes to my model argue against mission hedging. [More] But there’s room to argue in favor. [More]


The Review Service Vortex of Death

Consumers want to know which products are good without having to buy them first. One way to do that is by reading reviews.

There are third-party services that provide product reviews. Unfortunately, almost all of them are useless because they inevitably fall into the Review Service Vortex of Death.

Review services have a fundamental incentives problem stemming from two facts:

  1. Businesses don’t want to get bad reviews and they will pay a lot of money to have fake good reviews written, or to have bad reviews removed.
  2. Consumers can’t tell when a review service is removing bad reviews.

Therefore, any review service, even if consumers pay to use it, is incentivized to accept businesses’ money to remove bad reviews, thus making the service useless for consumers. And it can get away with this behavior for a long time.

The biggest review services—including the Better Business Bureau, Yelp, and Trustpilot—have all fallen into the Review Service Vortex of Death and should not be trusted by consumers, but they continue to be used because it’s not common knowledge that they delete bad reviews in exchange for money. (And indeed, it’s hard to even prove that they do.)

What can consumers do about this? I don’t know. Businesses like Amazon, which make their money from retail sales, are less likely to fall into the Vortex, but they’re still vulnerable to sellers writing fake reviews for their own products.

(I did write a review article one time—a review of donor-advised fund providers—and a couple of providers have subsequently emailed me to ask me to include them in my article. But sadly1, they didn’t offer me any bribes.)

Notes

  1. This is a joke. If a company offered me a bribe to include them, then I’d have to exclude them as a matter of principle. 


Index Funds That Vote Are Active, Not Passive

As an investor, you might invest in an index fund because you want to get unbiased, diversified exposure to the market. You don’t have to figure out which stocks will beat the market if you simply buy every stock.

But when you invest in an index fund, usually that means the fund can now vote on your behalf. The more stock an index fund owns, the more voting power it has. Generally speaking, the big index fund providers (including Vanguard and BlackRock) will vote in ways that align with their own corporate values—their top (stated) priorities are to increase climate change mitigation and workforce gender/racial diversity.

Regardless of whether you want this voting behavior, it means these index funds are not passive. By putting your money in an index fund that votes, you are implicitly claiming that it will make better voting decisions than the market.

(For that matter, any time you hold something other than the global market portfolio, you’re making an active bet. Sadly (and surprisingly), no single index fund offers the global market portfolio. But I digress.)


A Preliminary Model of Mission-Correlated Investing

Summary

TLDR: According to my preliminary model, the altruistic investing portfolio should ultimately allocate 5–20% on a risk-adjusted basis to mission-correlated investing. But for the current EA portfolio, it’s better on the margin to increase its risk-adjusted return than to introduce mission-correlated investments.

Last updated 2022-04-06.

The purpose of mission-correlated investing is to earn more money in worlds where your money matters more. For instance, if you’re working to prevent climate change, you could buy stock in oil companies. In worlds where oil companies are more successful and climate change gets worse, you make more money.

Previous work by Roth Tran (2019)1 proved that, under certain weak assumptions, philanthropists should invest more in so-called “evil” companies than they would from a pure profit-making standpoint. This result follows from the assumption that a philanthropist’s actions become more cost-effective when the world gets worse along some dimension.

That’s an interesting result. But all it says is altruists should invest more than zero in mission hedging. How much more? Am I supposed to allocate 1% of my wealth to mission-correlated assets? 5%? 100%?

To answer this question, I extended the standard portfolio choice problem to allow for mission-correlated investing. This model makes the same assumptions as the standard problem—asset prices follow lognormal distributions, people experience constant relative risk aversion, etc.—plus the assumption that utility of money increases linearly with the quantity of the mission target, e.g., because the more CO2 there is in the atmosphere, the cheaper it is to extract.
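To make that last assumption concrete, here is one way to write it down (my notation for a minimal sketch; the full model in the post may be set up differently):

```latex
% A minimal sketch of the key assumption (my notation, not a formula quoted above):
% w = wealth, m = quantity of the mission target (e.g., atmospheric CO2),
% \gamma = coefficient of relative risk aversion (CRRA utility).
u(w, m) = m \cdot \frac{w^{1-\gamma}}{1-\gamma}
% The marginal utility of money then scales linearly with the mission target:
\frac{\partial u}{\partial w} = m \, w^{-\gamma}
% so money is worth more in exactly the worlds where the mission target is worse.
```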

I used this model to find some preliminary results. Future work should further explore the model setup and the relevant empirical questions, which I discuss further in the future work section.

Here are the answers the model gives, with my all-things-considered confidence in each:

  • Given no constraints, philanthropists should allocate somewhere between 2% and 40% to mission hedging on a risk-adjusted basis,2 depending on what assumptions we make. Confidence: Somewhat likely. [More]
  • Given no constraints, and using my best-guess input parameters:
    • Under this model, a philanthropist who wants to hedge a predictable outcome, such as CO2 emissions, should allocate ~5% (risk-adjusted) to mission hedging.
    • Under this model, a philanthropist who wants to hedge a more volatile outcome, for example AI progress, should allocate ~20% to mission hedging on a risk-adjusted basis.
  • If you can’t use leverage, then you shouldn’t mission hedge unless mission hedging looks especially compelling. Confidence: Likely. [More]
  • If you currently invest most of your money in a legacy investment that you’d like to reduce your exposure to, then it’s more important on the margin to seek high expected return than to mission hedge. Confidence: Likely. [More]
  • The optimal allocation to mission hedging is proportional to: (Confidence: Likely; see the one-line formula after this list)
    1. the correlation between the hedge and the mission target being hedged;
    2. the standard deviation of the mission target;
    3. your degree of risk tolerance;
    4. the inverse of the standard deviation of the hedge.
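Putting those four factors together, the proportionality can be written in one line (my compact restatement of the list above, in standard notation):

```latex
% \rho     = correlation between the hedge and the mission target
% \sigma_m = standard deviation of the mission target
% \sigma_h = standard deviation of the hedge
% \gamma   = relative risk aversion (risk tolerance is 1/\gamma)
\text{risk-adjusted allocation to the hedge} \;\propto\; \frac{\rho \, \sigma_m}{\gamma \, \sigma_h}
```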

Cross-posted to the Effective Altruism Forum.


How I Estimate Future Investment Returns

To make informed investing decisions, I want to estimate the future expected return of my portfolio. Markets are unpredictable, and future returns will likely deviate significantly from any estimate—AQR believes there’s a 50% chance that 10-year realized equity returns will differ from their predictions by more than 3% per year. Still, it’s helpful to come up with a median guess.
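To get a feel for how wide that band is, here’s a toy calculation (the 5% baseline is an arbitrary placeholder, not my actual estimate or AQR’s):

```python
# Illustrative only: how much a +/-3%/year deviation compounds over 10 years.
# The 5% baseline is an arbitrary placeholder, not a real return estimate.
baseline = 0.05
years = 10

for annual_return in (baseline - 0.03, baseline, baseline + 0.03):
    growth = (1 + annual_return) ** years
    print(f"{annual_return:+.0%}/yr for {years} years -> {growth:.2f}x growth")

# +2%/yr -> ~1.22x, +5%/yr -> ~1.63x, +8%/yr -> ~2.16x:
# the difference compounds to a large gap in final wealth.
```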

In this post, I explain the projections that I use for my own financial planning.

Last updated 2022-06-16.

