# My Experience Trying to Force Myself to Do Deep Work

Inspired by Applied Divinity Studies’ Unemployment Part 2

Many people, such as Cal Newport, say that you can only do about four hours of deep work per day. I am a lot worse at deep work than that.

When I worked full-time as a software developer, I tried pretty hard to avoid distractions and stay focused on work. At the end of each day, I made a quick estimate of how much I got done that day. I rated myself on a 5-point productivity scale. A fully productive day, where I spent the bulk of the day doing meaningful work, earned the full 5 points. My estimates were by no means objective, but according to my own perception, I scored 5 points on a total of 91 out of 602 work days (that’s 15%). A 5-point day usually meant I spent around four hours doing deep work, and most of the rest of the day doing important shallow work.

When I failed to do sufficient deep work, it usually wasn’t because I spent too much time on meetings and emails (although that did happen occasionally). Most of the time, I was simply too dumb to get enough useful work done.

Most of my work required having insights. If I’m trying to debug a problem, figure out how to extend some bit of code, or plan out a new feature, I need to have a series of small ideas about how to make progress. For example, if I’m looking for the source of a bug, I might have the idea, “The problem might be in this particular function. I’ll step through the function and look for any anomalies.” On my less productive days, I would simply fail to generate any such ideas, preventing me from making any progress.

On my worst days (which happened about 15% of the time), I couldn’t manage even simple tasks. This was more about focus than about insight. It would usually go like this:

Me: Here’s the task that we need to do. As you can see, I’ve opened the relevant file and I’m currently looking at it. We need to do X to this file.

Brain: I don’t want to do that.

Me: Come on, you know exactly what to do, you should just do it.

Brain: No. Can we do something fun instead?

Me: We have to do this.

Brain: Well I’m not gonna.

Since I refused to get distracted and my brain refused to do the actual work, I’d end up staring at the screen doing nothing for about an hour. Eventually I’d give up on that and go read Hacker News or think about investing—things that feel work-adjacent but don’t qualify as genuine work.¹ After doing that for about 15 to 30 minutes, I’d feel guilty enough to go back to staring at my code while doing nothing.

## What predicts productivity?

I tracked a bunch of variables to see if anything could predict whether I’d have a good or bad day. I looked at sleep, how much I ate, the glycemic index of my lunch², whether I went to the gym, and whether I took melatonin the previous night; none showed a statistically significant effect. The only thing that clearly mattered was whether I had caffeine. On a five-point scale, caffeine boosted my average productivity by about 0.7 points.

(I’m highly confident that I perform worse when I don’t get enough sleep. But my sleep schedule was consistent enough that I didn’t have a sufficiently large sample of sleep-deprived work days to show a visible effect.)

(I didn’t blind myself, so any positive results could be explained by a placebo effect. But I only got one positive result, and I wasn’t expecting to get it before I analyzed the data—subjectively, I felt about the same on caffeine days and non-caffeine days.)
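At its core, this kind of comparison is just a difference in group means. Here is a minimal sketch of the calculation using made-up scores, not my actual logs:

```python
from statistics import mean

# Hypothetical 5-point productivity scores, split by caffeine intake.
# (Made-up numbers for illustration -- not my actual data.)
caffeine_days = [4, 5, 3, 5, 4]
no_caffeine_days = [3, 4, 3, 4, 3]

# Estimated effect: difference in mean productivity between the two groups.
effect = mean(caffeine_days) - mean(no_caffeine_days)
print(round(effect, 1))
```

A real analysis would also want a significance test and enough days in each group, which is exactly why the sleep variable was inconclusive: too few sleep-deprived days in the sample.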

About a year ago, when I started working full-time on independent research, the frequency of fully-productive days went up from 15% to 24% (53 out of 221). But median productivity declined, probably because I stopped trying to force myself to work when I didn’t want to.

My productivity tended to come in waves. I’d think of some interesting and digestible sub-problem, make a bunch of progress on it for a few days or weeks (depending on the size of the problem), figure out all the obvious stuff, and then lose steam and stop being productive for a while.

During waves of low productivity, the issue wasn’t that I didn’t know what to work on. I had plenty of ideas about important research areas. The problem was that I didn’t feel motivated to work on them. Usually, I’d only feel motivated to work on a problem shortly after I came up with it. Once the problem became stale, I’d lose motivation. But sometimes I’d get an unpredictable second wind on an old stale problem. It would be nice if I knew what caused those second winds, or if I could find a way to reliably trigger them. But so far I haven’t figured out how to do that.

# Notes

1. Some of the time spent on investing genuinely improved my life, and maybe even helped improve other people’s lives. But I also spent a lot of time doing obviously-useless things like checking the current value of my portfolio, or changing my hypothetical allocation to a particular strategy from 5% to 10% and then changing it back later.³

2. I didn’t have any precise way of measuring this. I just kept a list of common foods and their glycemic indexes, and then estimated the average glycemic index of my meal based on how much of each food I ate.

3. I say hypothetical because I have a rule that I only change my actual allocation if I consistently believe I should change it for a while—maybe a month or longer (depending on the scope of the change). I want to avoid what you might call “tinkering bias”—the desire to tinker with my allocation because tinkering is fun, not because I’m actually improving it. So I rarely change my actual investments.


# Mission Hedgers Want to Hedge Quantity, Not Price

Summary: Mission hedgers want to hedge the quantity of a good, but can only directly hedge the price. As a motivating example, can we mission hedge climate change using oil futures or oil company stock? Based on a cursory empirical analysis, it appears that we can, and that oil stock makes for the better hedge. But this answer relies on some questionable data (either that, or my methodology is bad).

## Introduction

The purpose of mission hedging is to earn more money in worlds where your money matters more. For instance, if you’re working to prevent climate change, you could buy stock in oil companies. In worlds where oil companies are more successful and climate change is worse, you make more money—at least in theory. But it might be hard in practice to find a good way to hedge climate change.

Let’s start with a simple example. Suppose you’re considering hedging climate change by buying oil futures. Does that work?

If people burn more oil, that directly contributes to climate change. You’d like to make money as the quantity of oil goes up. If you buy oil futures, you will make money as the price of oil goes up. The problem is, the quantity and price of oil aren’t necessarily related.
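A toy example, with numbers I made up, shows how the two can come apart: a supply shock can raise the price while the quantity burned falls, so the futures position profits exactly when the mission-relevant variable improves:

```python
# Hypothetical scenario: a supply shock raises the oil price while
# consumption -- the mission-relevant quantity -- falls.
price_change = 0.25      # oil price rises 25%
quantity_change = -0.10  # oil burned falls 10%

notional = 100_000                     # dollars of long oil futures exposure
futures_pnl = notional * price_change  # profit from the price move alone

# A mission hedge should pay off when the bad outcome (more oil burned)
# happens. Here the position profits even though the quantity fell.
hedge_aligned = (futures_pnl > 0) == (quantity_change > 0)
print(futures_pnl, hedge_aligned)
```

In this scenario the futures holder makes $25,000 while emissions improve, which is the opposite of what a mission hedger wants.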

Before we talk about the economics of oil prices, let’s clarify some terminology.


# How Do AI Timelines Affect Giving Now vs. Later?

Cross-posted to the Effective Altruism Forum.

How do AI timelines affect the urgency of working on AI safety?

It seems plausible that, if artificial general intelligence (AGI) will arrive soon, then we need to spend quickly on AI safety research. And if AGI is still a way off, we can spend more slowly. Are these positions justified? If we have a bunch of capital and we’re deciding how quickly to spend it, do we care about AI timelines? Intuitively, it seems like the answer is yes. But is it possible to support this intuition with a mathematical model?

TLDR: Yes. Under plausible model assumptions, there is a direct relationship between AI timelines and how quickly we should spend on AI safety research.


# Metaculus Questions Suggest Money Will Do More Good in the Future

Cross-posted to the Effective Altruism Forum.

In the giving now vs. later debate, a conventional argument in favor of giving now is that people become better off over time, so money spent later will do less good. But some have argued the opposite: as time passes, we learn more about how to do good, and therefore we should give later. (Or, alternatively, we should use our money now to try to accelerate the learning rate.)

Metaculus provides some evidence that the second argument is the correct one: money spent later will do more good than money spent now.

This evidence comes from two Metaculus questions:

A brief explanation for each of these and why they matter:

On question 1: As of July 2021, Metaculus gives a 30% probability that one of GiveWell’s 2019 top charities will be ranked as the most cost-effective charity in 2031. That means a 70% chance that the 2031 charity will *not* be one of the 2019 recommendations. This could happen for two reasons: either the 2019 recommended charities run out of room for more funding, or GiveWell finds a charity that’s better than any of the 2019 recommendations. This at least weakly suggests that Metaculus users expect GiveWell to improve its recommendations over time.

On question 2: Metaculus estimates that GiveWell’s top charity in 2031 will need to spend $430 per life saved equivalent (according to GiveWell’s own analysis). For comparison, in 2019, GiveWell estimated that its most cost-effective charity spends $592 per life saved equivalent. (These figures are adjusted for inflation.)

As with question 1, this does not unambiguously show that GiveWell top charities are expected to improve over time. Perhaps instead Metaculus expects GiveWell’s estimate is currently too pessimistic, and it will converge on the true answer by 2031. But the cost reduction could also happen because GiveWell top charities truly get more effective over time.
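The size of the implied improvement is straightforward arithmetic on the two figures above:

```python
# Cost per life saved equivalent, inflation-adjusted, from the figures above.
cost_2019 = 592  # GiveWell's 2019 estimate for its most cost-effective charity
cost_2031 = 430  # Metaculus community estimate for the 2031 top charity

cost_reduction = 1 - cost_2031 / cost_2019  # fractional drop in cost
extra_impact = cost_2019 / cost_2031 - 1    # extra impact per dollar spent
print(round(cost_reduction, 2), round(extra_impact, 2))
```

If the forecast is right, saving a life gets roughly 27% cheaper by 2031, which means a dollar spent then buys about 38% more impact than a dollar spent in 2019.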

Some caveats:

1. These Metaculus answers only represent the opinions of forecasters, not any formal analysis. (Some forecasters may have incorporated formal analyses into their predictions.)
2. Neither question directly asks whether money spent in 2031 will do more good than money spent now. (I don’t know how to operationalize a direct question like that. Please tell me if you have any ideas.)
3. These questions only ask about GiveWell top charities. Even if GiveWell recommendations become more effective over time, the same might not be true for other cause areas.

# Reverse-Engineering the Philanthropic Discount Rate

## Summary

• How much of our resources should we spend now, and how much should we invest for the future? The correct balance is largely determined by how much we discount the future. A higher discount rate means we should spend more now; a lower discount rate tells us to spend more later.
• In a previous essay, I directly estimated the philanthropic discount rate. Alternatively, we can reverse-engineer the philanthropic discount rate from typical investors’ discount rates if we know the difference between the two. [More]
• In theory, people invest differently depending on what discount rate they use. We can estimate the typical discount rate by looking at historical investment performance. But the results vary depending on what data we look at. [More]
• We can also look at surveys of experts’ beliefs on the discount rate, but it’s not clear how to interpret their answers. [More]
• Then we need to know the difference between the typical and philanthropic discount rates. But it’s difficult to say to what extent philanthropists and typical investors disagree. [More]
• Some additional details raise more concerns about the reliability of this methodology. [More]
• Ultimately, it looks like we cannot effectively reverse-engineer the philanthropic discount rate, even if we spend substantially more effort on the problem. But under some conditions, we prefer to give later as long as we discount at a lower rate than non-philanthropists, which means we don’t need to make precise estimates. [More]

# How can we increase the frequency of rare insights?

In many contexts, progress largely comes not from incremental progress, but from sudden and unpredictable insights. This is true at many different levels of scope—from one person’s current project, to one person’s life’s work, to the aggregate output of an entire field. But we know almost nothing about what causes these insights or how to increase their frequency.


# Investment Strategies for Donor-Advised Funds

A donor-advised fund (DAF) is an investment account that allows you to take a tax deduction now and give the money to charity later. When you give money to a DAF, you can deduct that money just as you would deduct a charitable contribution. The DAF invests the money tax-free until you are ready to donate it to charity. But DAFs only allow limited investment options. How can we best make use of a DAF to optimize expected investment performance?


# A Comparison of Donor-Advised Fund Providers

A donor-advised fund (DAF) is an investment account that lets you take a tax deduction now and give the money to charity later. When you give money to a DAF, you can deduct that money just as you would deduct a charitable contribution. The DAF invests the money tax-free. At any time, you can direct the DAF to donate some or all of its holdings to the charity of your choice.

You can open a DAF through a donor-advised fund provider. A provider charges an administrative fee to invest your DAF and make donations in accordance with your recommendations.

For donors in the United States, which DAF provider is the best?

All of the big DAF providers offer similar features. For most people, it doesn’t really matter which one you choose.

• If you already have a DAF, you might as well keep using it.
• If you have a brokerage account at Fidelity, Schwab, or Vanguard, then the easiest thing to do is to open a DAF with your brokerage account. That way, you can manage all your investments in one place.

Otherwise, I believe Schwab Charitable is the best DAF provider for most people.

Even if all the major DAF providers are reasonably good, they do have their own strengths and weaknesses. In the rest of this post, let’s look at how they compare.


# The True Cost of Leveraged ETFs

Under some circumstances, altruists might prefer to leverage their investments. The easiest way to get leverage is to buy leveraged ETFs. But leveraged ETFs charge high fees and incur other hidden costs. These costs vary substantially across different funds and across time, but on average, leveraged ETFs have historically had annual excess costs of about 2%, or around 1.5% on top of the expense ratio.

Given reasonable expectations for future returns, leveraged ETFs most likely have substantially higher arithmetic mean returns than their un-leveraged benchmarks. They also appear to have higher geometric mean returns than their benchmarks, but only by a small margin. Slightly more pessimistic estimates would find that adding leverage decreases geometric return.

Note: Many investors can get leverage more cheaply via other methods, such as margin loans or futures. Even if leveraged ETFs appear better than un-leveraged investments, other forms of leverage might be better still.
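To see why the geometric-return advantage is thin, here is a back-of-the-envelope sketch using the standard approximation geometric mean ≈ arithmetic mean − variance/2. The return and volatility inputs are my own assumptions; only the 2% excess cost figure comes from the historical average above:

```python
# Back-of-the-envelope sketch with assumed inputs (not from the post's data).
mu = 0.06           # assumed unlevered arithmetic mean return
sigma = 0.16        # assumed annual volatility
excess_cost = 0.02  # historical average annual excess cost cited above
L = 2.0             # 2x leveraged ETF

# Standard approximation: geometric mean ~= arithmetic mean - variance / 2.
# Leverage multiplies both the mean and the volatility, and the fund's
# excess costs come straight off the top.
geo_unlevered = mu - sigma**2 / 2
geo_levered = L * mu - excess_cost - (L * sigma) ** 2 / 2
print(round(geo_unlevered, 4), round(geo_levered, 4))
```

With these inputs the 2x fund’s geometric return beats the benchmark by only about 0.2 percentage points (4.88% vs. 4.72%), and a slightly lower mean or higher volatility flips the sign, consistent with the claim above.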

Disclaimer: I am not an investment advisor and this should not be taken as investment advice. This content is for informational purposes only. Please do your own research or seek professional advice and otherwise take reasonable precautions before making any significant investment decisions. Past performance is not a guarantee of future results. Any given portfolio results are hypothetical and do not represent returns achieved by an actual investor.