Future Funding/Talent/Capacity Constraints Matter, Too

Last updated 2021-10-20.

People who talk about talent/funding/capacity constraints mostly focus on which constraint is the biggest right now. But it also matters what the constraints will be later.

Right now, the EA community holds a lot of wealth—more wealth than it can productively spend in the next few years, at least on smaller cause areas such as AI safety, cause prioritization research, and wild animal welfare. Those newer fields need time to scale up so they can absorb more funding.

That doesn’t mean EAs should stop earning to give. Maybe most EAs could do more good this year with their direct efforts than with their donations. But perhaps 10 years from now, the smaller causes will have scaled up a lot, and they’ll be able to deploy much more money. Earners-to-give can invest their money for a while, and then deploy it once top causes develop enough spending capacity.

Low-Hanging (Monetary) Fruit for Wealthy EAs

Confidence: Likely.

Cross-posted to the Effective Altruism Forum.

Ordinary wealthy people don’t care as much about getting more money because they already have a lot of it, so they don’t search very hard for ways to get more. That means we should expect to be able to find overlooked methods for rich people to get richer.1 Wealthy effective altruists might value their billionth dollar nearly as much as their first dollar, so they should seek out these overlooked methods.

If someone got rich doing X (where X = starting a startup, excelling at a high-paying profession, etc.), their best way of making money on the margin might not be to do more X. It might be to do something entirely different.

Some examples:

(Edit 2024-03-18: This paragraph did not age well…although the point about retaining equity is still valid.)

Sam Bankman-Fried increased his net worth by $10,000,000,000 in four years by founding FTX. He earned most of those zeroes by doing the hard work of starting a company, and there’s no shortcut around that. But, importantly, he managed to retain most of his original stake in FTX. For most founders, by the time their company is worth $10 billion or more, they only own maybe 10% of it. If Sam had given away a normal amount of equity to VCs, he might have only gotten $2 billion from FTX instead of $10 billion. In some sense, 80% of the money he earned from FTX came purely from retaining equity.2
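
To make the equity arithmetic concrete, here is a rough sketch. The ~$20 billion valuation and the ownership stakes are my own illustrative assumptions, chosen only to reconcile the figures above, not FTX’s actual cap table:

```python
# Back-of-the-envelope founder proceeds: retained stake x company valuation.
# All numbers here are illustrative assumptions, not FTX's actual cap table.

def founder_proceeds(stake: float, valuation: float) -> float:
    """Founder's paper wealth given the ownership stake they retained."""
    return stake * valuation

valuation = 20e9  # "worth $10 billion or more" -- suppose roughly $20B

# A typically VC-diluted founder keeps ~10% by this stage:
print(f"${founder_proceeds(0.10, valuation) / 1e9:.0f}B")  # $2B

# Retaining ~50% instead yields the full figure:
print(f"${founder_proceeds(0.50, valuation) / 1e9:.0f}B")  # $10B
```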

Do I Read My Own Citations?

It has often been said that scholars don’t read their own citations. Out of curiosity, I decided to go through one of my longer essays to see how many of my citations I read.

(I actually did this exercise a while ago, around the time I published the original essay. Today I was going through my personal journal and found my notes on the exercise, and I thought it might be worth sharing publicly.)

The results:

  • My essay cites a total of 37 academic papers.
  • For 9 citations, I read the entire paper top to bottom and took notes to help me remember.
  • For another 6 citations, I skimmed them but didn’t read carefully or take notes.
  • For the remaining 22, I only read the abstracts.

Summaries Are Important

Every informative essay or research paper should include a summary at the beginning. Write your summary with the expectation that most readers will ONLY read the summary. The summary should tell most readers everything they need to know. The body of the article only exists to provide context and supporting evidence.

My Experience Trying to Force Myself to Do Deep Work

Inspired by Applied Divinity Studies’ Unemployment Part 2

Many people, such as Cal Newport, say that you can only do about four hours of deep work per day. I am a lot worse at deep work than that.

When I worked full-time as a software developer, I tried pretty hard to avoid distractions and stay focused on work. At the end of each day, I made a quick estimate of how much I got done that day. I rated myself on a 5-point productivity scale. A fully productive day, where I spent the bulk of the day doing meaningful work, earned the full 5 points. My estimates were by no means objective, but according to my own perception, I scored 5 points on a total of 91 out of 602 work days (that’s 15%). A 5-point day usually meant I spent around four hours doing deep work, and most of the rest of the day doing important shallow work.

Mission Hedgers Want to Hedge Quantity, Not Price

Summary: Mission hedgers want to hedge the quantity of a good, but can only directly hedge the price. As a motivating example, can we mission hedge climate change using oil futures or oil company stock? Based on a cursory empirical analysis, it appears that we can, and that oil stock makes for the better hedge. But this answer relies on some questionable data (either that, or my methodology is bad).

Introduction

The purpose of mission hedging is to earn more money in worlds where your money matters more. For instance, if you’re working to prevent climate change, you could buy stock in oil companies. In worlds where oil companies are more successful and climate change is worse, you make more money—at least in theory. But it might be hard in practice to find a good way to hedge climate change.

Let’s start with a simple example. Suppose you’re considering hedging climate change by buying oil futures. Does that work?

If people burn more oil, that directly contributes to climate change, so you’d like to make money as the quantity of oil burned goes up. But if you buy oil futures, you make money as the price of oil goes up. The problem is that the quantity and the price of oil don’t necessarily move together.
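
To make the distinction concrete, here is a minimal sketch of a long futures position in two hypothetical worlds. All numbers are invented for illustration (including the 1,000-barrel contract size):

```python
# Toy illustration with invented numbers, not real market data.
# A long futures position pays off on the PRICE change, regardless of
# what happens to the QUANTITY of oil burned, which is what actually
# drives emissions.

def futures_pnl(entry_price: float, exit_price: float, contracts: int,
                barrels_per_contract: int = 1000) -> float:
    """P&L of a long futures position: depends only on the price path."""
    return contracts * barrels_per_contract * (exit_price - entry_price)

# World A: consumption (quantity) rises 10%, but a supply glut pushes the
# price from $70 to $60. Emissions are worse, yet the hedge loses money.
print("World A:", futures_pnl(70.0, 60.0, contracts=10))  # -100,000

# World B: consumption falls, but a supply shock pushes the price from
# $70 to $85. Emissions are better, yet the hedge makes money.
print("World B:", futures_pnl(70.0, 85.0, contracts=10))  # +150,000
```

In both worlds the position tracks the price, not the quantity we actually care about.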

Before we talk about the economics of oil prices, let’s clarify some terminology.

How Do AI Timelines Affect Giving Now vs. Later?

Cross-posted to the Effective Altruism Forum.

How do AI timelines affect the urgency of working on AI safety?

It seems plausible that if artificial general intelligence (AGI) arrives soon, then we need to spend quickly on AI safety research, and if AGI is still a long way off, we can spend more slowly. Are these positions justified? If we have a bunch of capital and we’re deciding how quickly to spend it, do we care about AI timelines? Intuitively, it seems like the answer is yes. But is it possible to support this intuition with a mathematical model?

TLDR: Yes. Under plausible model assumptions, there is a direct relationship between AI timelines and how quickly we should spend on AI safety research.
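
As a sketch of why the answer comes out yes, here is a toy model of my own, not the essay’s actual model. The 5% growth rate, square-root returns to spending, and exponential AGI arrival time are all assumptions made purely for illustration:

```python
import numpy as np

# Toy "now vs. later" model: capital grows at rate r; we spend a constant
# fraction x of it per year; spending only produces value if it happens
# before AGI arrives; arrival time is exponential with the given mean.
def expected_value(x, mean_timeline, r=0.05, horizon=200.0, dt=0.1):
    t = np.arange(0.0, horizon, dt)
    capital = np.exp((r - x) * t)          # capital remaining at time t
    spending = x * capital                 # spending flow at time t
    survival = np.exp(-t / mean_timeline)  # P(AGI hasn't arrived by t)
    # Square-root term models diminishing returns to research spending.
    return float(np.sum(np.sqrt(spending) * survival) * dt)

rates = np.linspace(0.005, 0.5, 200)
for timeline in [10, 30, 100]:
    best = rates[np.argmax([expected_value(x, timeline) for x in rates])]
    print(f"mean AGI timeline {timeline:>3} years -> spend ~{best:.1%}/year")
```

In this toy version, a 10-year mean timeline pushes the optimal spending rate more than an order of magnitude higher than a 100-year timeline, which matches the intuition above.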

Metaculus Questions Suggest Money Will Do More Good in the Future

Cross-posted to the Effective Altruism Forum.

Update 2021-10-06: I believe I was overconfident in my original interpretations of these Metaculus questions. Some EA Forum commenters pointed out alternative interpretations of people’s answers that could allow us to draw orthogonal or opposite conclusions. For example, on question 1, Metaculus users might predict that GiveWell’s top charities will drop off the list by 2031 not because better charities are discovered, but because the current charities run out of room for more funding.

In the giving now vs. later debate, a conventional argument in favor of giving now is that people become better off over time, so money spent later will do less good. But some have argued the opposite: as time passes, we learn more about how to do good, and therefore we should give later. (Or, alternatively, we should use our money now to try to accelerate the learning rate.)

Metaculus provides some evidence that the second argument is the correct one: money spent later will do more good than money spent now.

This evidence comes from two Metaculus questions:

  1. Will one of GiveWell’s 2019 top charities be estimated as the most cost-effective charity in 2031?
  2. How much will GiveWell guess it will cost to get an outcome as good as saving a life, at the end of 2031?

A brief explanation for each of these and why they matter:

On question 1: As of July 2021, Metaculus gives a 30% probability that one of GiveWell’s 2019 top charities will be ranked as the most cost-effective charity in 2031. That means a 70% chance that the most cost-effective charity in 2031 will NOT be one of the 2019 recommendations. This could happen for two reasons: either the 2019 recommended charities run out of room for more funding, or GiveWell finds a charity that’s better than any of the 2019 recommendations. This at least weakly suggests that Metaculus users expect GiveWell to improve its recommendations over time.

On question 2: Metaculus estimates that GiveWell’s top charity in 2031 will need to spend $430 per life saved equivalent (according to GiveWell’s own analysis). For comparison, in 2019, GiveWell estimated that its most cost-effective charity spends $592 per life saved equivalent, so forecasters expect the cost to fall by roughly 27%. (These figures are adjusted for inflation.)

As with question 1, this does not unambiguously show that GiveWell’s top charities are expected to improve over time. The cost reduction could happen because the charities truly get more effective, but it could also happen because Metaculus thinks GiveWell’s current estimate is too pessimistic and that GiveWell will converge on the true answer by 2031.

Some caveats:

  1. These Metaculus answers only represent the opinions of forecasters, not any formal analysis. (Some forecasters may have incorporated formal analyses into their predictions.)
  2. Neither question directly asks whether money spent in 2031 will do more good than money spent now. (I don’t know how to operationalize a direct question like that. Please tell me if you have any ideas.)
  3. These questions only ask about GiveWell top charities. Even if GiveWell recommendations become more effective over time, the same might not be true for other cause areas.

Reverse-Engineering the Philanthropic Discount Rate

Summary

  • How much of our resources should we spend now, and how much should we invest for the future? The correct balance is largely determined by how much we discount the future. A higher discount rate means we should spend more now; a lower discount rate tells us to spend more later.
  • In a previous essay, I directly estimated the philanthropic discount rate. Alternatively, we can reverse-engineer the philanthropic discount rate from typical investors’ discount rates if we know the difference between the two (sketched after this list). [More]
  • In theory, people invest differently depending on what discount rate they use. We can estimate the typical discount rate by looking at historical investment performance. But the results vary depending on what data we look at. [More]
  • We can also look at surveys of experts’ beliefs on the discount rate, but it’s not clear how to interpret their answers. [More]
  • Then we need to know the difference between the typical and philanthropic discount rates. But it’s difficult to say to what extent philanthropists and typical investors disagree. [More]
  • Some additional details raise more concerns about the reliability of this methodology. [More]
  • Ultimately, it looks like we cannot effectively reverse-engineer the philanthropic discount rate, even if we spend substantially more effort on the problem. But under some conditions, we prefer to give later as long as we discount at a lower rate than non-philanthropists, which means we don’t need to make precise estimates. [More]
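
As a rough formalization of the reverse-engineering step promised above (my notation, plus a textbook Ramsey relation; not necessarily the essay’s exact setup):

```latex
% delta_I = typical investors' pure time discount rate
% delta_P = the philanthropic discount rate we want
% Delta   = how much more patient philanthropists are than typical investors
\[
  \delta_P = \delta_I - \Delta
\]
% The Ramsey rule backs delta_I out of observables: r = expected investment
% return, eta = relative risk aversion, g = consumption growth.
\[
  r = \delta_I + \eta g
  \quad\Longrightarrow\quad
  \delta_P = (r - \eta g) - \Delta
\]
% Per the final bullet: under some conditions, "give later" wins whenever
% Delta > 0 (i.e. delta_P < delta_I), so precise point estimates of
% delta_P are not actually needed.
```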