Should Earners-to-Give Work at Startups Instead of Big Companies?

Summary

Confidence: Somewhat likely

Cross-posted to the Effective Altruism Forum.

Effective altruist earners-to-give might be able to donate more money if, instead of working at big companies for high salaries, they work at startups and get paid in equity. Startups are riskier than big companies, but EAs care less about risk than most people.

Working at a startup is easier than starting one. It doesn’t pay as well, but based on my research, it looks like EA startup employees can earn more than big company employees in expectation.

Does the optimal EA investment portfolio include a significant allocation to startups? To answer that question, I estimated the expected return and risk of startups by combining the following considerations:

  1. Find a baseline of startup performance by looking at historical data on VC firm returns.
  2. VC performance is somewhat persistent. EAs can beat the average by working at startups that the top VC firms invest in.
  3. Startup employees get worse equity terms than VCs, but they also don’t have to pay management fees, and they get meta-options. Overall, employees come out looking better than VCs.
  4. Current market conditions suggest that future performance will be worse than past performance.
  5. Startups are much riskier than publicly-traded stocks, and the startup market is moderately correlated with stocks (r=0.7).
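Consideration 5 can be made concrete with the standard two-asset portfolio volatility formula. A minimal sketch, where only the r=0.7 correlation comes from the list above; the volatility figures are hypothetical placeholders, not estimates from the essay:

```python
import math

# Hypothetical annualized volatilities (illustrative only, not from the essay)
vol_stocks = 0.16      # public equities
vol_startups = 0.50    # a startup-heavy portfolio, much riskier
corr = 0.7             # startup/stock correlation from the list above

def portfolio_vol(w_startups):
    """Volatility of a stocks+startups portfolio with the given startup weight."""
    w_s = 1 - w_startups
    var = ((w_s * vol_stocks) ** 2
           + (w_startups * vol_startups) ** 2
           + 2 * w_s * w_startups * vol_stocks * vol_startups * corr)
    return math.sqrt(var)

for w in (0.0, 0.1, 0.25):
    print(f"{w:.0%} startups -> {portfolio_vol(w):.1%} portfolio volatility")
```

Because the correlation is well below 1, a modest startup allocation adds less total risk than the startups' standalone volatility would suggest.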

All things considered, my best guess is that more earners-to-give should consider working at startups.


Obvious Investing Facts

Last updated 2021-11-22.

Many investors, even professionals, are ignorant about obvious facts that they really should know. When I say a fact is “obvious”, what I mean is that you can easily observe it by looking at widely-available data using simple statistical tools.

A list of obvious but underappreciated facts:

Fact 1. A single large-cap stock is about 2x as volatile as the total stock market. A small-cap stock is about 4x as volatile. It’s common knowledge that individual stocks are risky, but most people don’t know how to quantify the risk, and I believe they tend to underestimate it. I wrote a whole essay about this because I think it’s the most important underappreciated investing fact.
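The claim in Fact 1 can be sanity-checked with exactly the "simple statistical tools" mentioned above. A sketch using synthetic daily returns (simulated data, not real market data; the 2x and 4x multiples are the post's figures):

```python
import random
import statistics

random.seed(0)

def annualized_vol(daily_sd):
    # ~252 trading days per year; volatility scales with sqrt(time)
    return daily_sd * (252 ** 0.5)

# Synthetic daily returns standing in for "the total market"
market_daily_sd = 0.01   # roughly 16% annualized
returns = [random.gauss(0, market_daily_sd) for _ in range(252 * 10)]

est_sd = statistics.stdev(returns)
print(f"total market:           {annualized_vol(est_sd):.1%}")
print(f"single large-cap (~2x): {annualized_vol(2 * est_sd):.1%}")
print(f"single small-cap (~4x): {annualized_vol(4 * est_sd):.1%}")
```

With real data you would substitute actual daily return series for the simulated ones; the annualization step is the same.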


Future Funding/Talent/Capacity Constraints Matter, Too

Last updated 2021-10-20.

People who talk about talent/funding/capacity constraints mostly focus on which constraint is biggest right now. But it also matters which constraints will bind later.

Right now, the EA community holds a lot of wealth—more wealth than it can productively spend in the next few years, at least on smaller cause areas such as AI safety, cause prioritization research, and wild animal welfare. Those newer fields need time to scale up so they can absorb more funding.

That doesn’t mean EAs should stop earning to give. Maybe most EAs could do more good this year with their direct efforts than with their donations. But perhaps 10 years from now, the smaller causes will have scaled up a lot, and they’ll be able to deploy much more money. Earners-to-give can invest their money for a while, and then deploy it once top causes develop enough spending capacity.
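As a toy illustration of the invest-then-deploy option (the 7% real return is a hypothetical assumption, not a figure from the post):

```python
# $100k donated now vs. invested for 10 years and donated once the
# smaller causes can absorb more funding. Return rate is hypothetical.
principal = 100_000
real_return = 0.07
years = 10

future_value = principal * (1 + real_return) ** years
print(f"${principal:,} invested for {years} years -> ${future_value:,.0f}")
```

Whether the invested money does more good depends, of course, on how the cost-effectiveness of the top causes changes over those ten years, not just on the return.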


Low-Hanging (Monetary) Fruit for Wealthy EAs

Confidence: Likely

Cross-posted to the Effective Altruism Forum.

Ordinary wealthy people don’t care as much about getting more money because they already have a lot of it. So we should expect to be able to find overlooked methods for rich people to get richer.1 Wealthy effective altruists might value their billionth dollar nearly as much as their first dollar, so they should seek out these overlooked methods.

If someone got rich doing X (where X = starting a startup, excelling at a high-paying profession, etc.), their best way of making money on the margin might not be to do more X. It might be to do something entirely different.

Some examples:

Sam Bankman-Fried increased his net worth by $10,000,000,000 in four years by founding FTX. He earned most of those zeroes by doing the hard work of starting a company, and there’s no shortcut around that. But, importantly, he managed to retain most of his original stake in FTX. For most founders, by the time their company is worth $10 billion or more, they only own maybe 10% of it. If Sam had given away a normal amount of equity to VCs, he might have only gotten $2 billion from FTX instead of $10 billion. In some sense, 80% of the money he earned from FTX came purely from retaining equity.2
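The arithmetic behind this example can be sketched directly. The round-by-round dilution fractions below are hypothetical; the $10B and $2B endpoints are the post's rough figures, not cap-table data:

```python
# A founder who sells ~20% of the company in each of several VC rounds
# keeps a shrinking slice (hypothetical round sizes):
stake = 1.0
for fraction_sold in (0.20, 0.20, 0.20, 0.20):
    stake *= 1 - fraction_sold
print(f"stake after four 20% rounds: {stake:.0%}")

# The post's comparison: retaining equity vs. a typical founder outcome.
retained_value = 10_000_000_000
typical_value = 2_000_000_000
share_from_retention = (retained_value - typical_value) / retained_value
print(f"earned purely by retaining equity: {share_from_retention:.0%}")
```

Each round multiplies the founder's stake by (1 - fraction sold), which is why stakes erode so quickly over several rounds.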


Do I Read My Own Citations?

It has often been said that scholars don’t read their own citations. Out of curiosity, I decided to go through one of my longer essays to see how many of my citations I read.

(I actually did this exercise a while ago, around the time I published the original essay. Today I was going through my personal journal and found my notes on the exercise, and I thought it might be worth sharing publicly.)

The results:

  • My essay cites a total of 37 academic papers.
  • For 9 citations, I read the entire thing top to bottom and took notes to help me remember.
  • For another 6 citations, I skimmed them but didn’t read carefully or take notes.
  • For the remaining 22, I only read the abstracts.

Summaries Are Important

Every informative essay or research paper should include a summary at the beginning. Write your summary with the expectation that most readers will ONLY read the summary. The summary should tell most readers everything they need to know. The body of the article only exists to provide context and supporting evidence.


My Experience Trying to Force Myself to Do Deep Work

Inspired by Applied Divinity Studies’ Unemployment Part 2

Many people, such as Cal Newport, say that you can only do about four hours of deep work per day. I am a lot worse at deep work than that.

When I worked full-time as a software developer, I tried pretty hard to avoid distractions and stay focused on work. At the end of each day, I made a quick estimate of how much I got done that day. I rated myself on a 5-point productivity scale. A fully productive day, where I spent the bulk of the day doing meaningful work, earned the full 5 points. My estimates were by no means objective, but according to my own perception, I scored 5 points on a total of 91 out of 602 work days (that’s 15%). A 5-point day usually meant I spent around four hours doing deep work, and most of the rest of the day doing important shallow work.


Mission Hedgers Want to Hedge Quantity, Not Price

Summary: Mission hedgers want to hedge the quantity of a good, but can only directly hedge the price. As a motivating example, can we mission hedge climate change using oil futures or oil company stock? Based on a cursory empirical analysis, it appears that we can, and that oil stock makes for the better hedge. But this answer relies on some questionable data (either that, or my methodology is bad).

Introduction

The purpose of mission hedging is to earn more money in worlds where your money matters more. For instance, if you’re working to prevent climate change, you could buy stock in oil companies. In worlds where oil companies are more successful and climate change is worse, you make more money—at least in theory. But it might be hard in practice to find a good way to hedge climate change.

Let’s start with a simple example. Suppose you’re considering hedging climate change by buying oil futures. Does that work?

If people burn more oil, that directly contributes to climate change. You'd like to make money as the quantity of oil goes up. If you buy oil futures, you will make money as the price of oil goes up. The problem is that the quantity and price of oil don't necessarily move together: a supply disruption, for example, can push the price up even as the quantity burned falls.
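A toy scenario (entirely made-up numbers) makes the mismatch concrete: a futures position pays off on price moves regardless of what the quantity does:

```python
# Illustrative only: in a demand boom, price and quantity rise together,
# so the futures "hedge" works. In a supply shock, price rises while the
# quantity burned FALLS, yet the futures position pays off anyway, even
# though the thing you wanted to hedge (emissions) went down.
scenarios = {
    # name: (price change, quantity change)
    "demand boom":  (+0.30, +0.20),
    "supply shock": (+0.30, -0.10),
}

futures_notional = 1_000_000
for name, (dprice, dquantity) in scenarios.items():
    futures_pnl = futures_notional * dprice   # futures track price only
    print(f"{name}: quantity {dquantity:+.0%}, futures P&L ${futures_pnl:+,.0f}")
```

Both scenarios produce the same futures profit, which is exactly the problem: the payoff is blind to quantity.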

Before we talk about the economics of oil prices, let’s clarify some terminology.


How Do AI Timelines Affect Giving Now vs. Later?

Cross-posted to the Effective Altruism Forum.

How do AI timelines affect the urgency of working on AI safety?

It seems plausible that, if artificial general intelligence (AGI) will arrive soon, then we need to spend quickly on AI safety research. And if AGI is still a way off, we can spend more slowly. Are these positions justified? If we have a bunch of capital and we’re deciding how quickly to spend it, do we care about AI timelines? Intuitively, it seems like the answer is yes. But is it possible to support this intuition with a mathematical model?

TLDR: Yes. Under plausible model assumptions, there is a direct relationship between AI timelines and how quickly we should spend on AI safety research.

